Cube texturing in OpenGL 3 - Java

Just doing my computer graphics assignment: put a texture (a 600x400 bitmap with different numbers) on a cube to form a proper die. I managed to do it using "classical" texture mapping: creating vertices and adding corresponding texture coordinates to them:
int arrayindex = 0;
float xpos = 0.0f;
float xposEnd = 0.32f;
float ypos = 0.0f;
float yposEnd = 0.49f;
int count = 0;

void quad(int a, int b, int c, int d) {
    // First triangle of the face: a, b, c
    colors[arrayindex] = vertex_colors[a];
    points[arrayindex] = vertices[a];
    tex_coord[arrayindex] = new Point2(xpos, ypos);
    arrayindex++;
    colors[arrayindex] = vertex_colors[b];
    points[arrayindex] = vertices[b];
    tex_coord[arrayindex] = new Point2(xpos, yposEnd);
    arrayindex++;
    colors[arrayindex] = vertex_colors[c];
    points[arrayindex] = vertices[c];
    tex_coord[arrayindex] = new Point2(xposEnd, yposEnd);
    arrayindex++;
    // Second triangle of the face: a, c, d
    colors[arrayindex] = vertex_colors[a];
    points[arrayindex] = vertices[a];
    tex_coord[arrayindex] = new Point2(xpos, ypos);
    arrayindex++;
    colors[arrayindex] = vertex_colors[c];
    points[arrayindex] = vertices[c];
    tex_coord[arrayindex] = new Point2(xposEnd, yposEnd);
    arrayindex++;
    colors[arrayindex] = vertex_colors[d];
    points[arrayindex] = vertices[d];
    tex_coord[arrayindex] = new Point2(xposEnd, ypos);
    arrayindex++;
    // Advance to the next number cell in the 3x2 texture atlas
    xpos = xpos + 0.34f;
    xposEnd = xpos + 0.32f;
    count++;
    if (count == 3) {
        // First row of the atlas done; start the second row
        xpos = 0.0f;
        xposEnd = 0.33f;
        ypos = 0.51f;
        yposEnd = 1.0f;
    }
}
void colorcube() {
    quad(1, 0, 3, 2);
    quad(2, 3, 7, 6);
    quad(3, 0, 4, 7);
    quad(6, 5, 1, 2);
    quad(5, 4, 0, 1);
    quad(4, 5, 6, 7);
    pointsBuf = VectorMath.toBuffer(points);
    colorsBuf = VectorMath.toBuffer(colors);
    texcoord = VectorMath.toBuffer(tex_coord);
}
I pass all of this to the shaders and put it together there.
But reviewing the slides I noticed this method is supposed to be "pre-OpenGL 3".
Is there another method to do this?
In the lecture examples I noticed the texture coordinates being computed in the vertex shader, but that was just for a simple 2D plane, not a 3D cube:
tex_coords = vec2(vPosition.x+0.5,vPosition.z+0.5);
which is later passed to the fragment shader to sample the texture.

But reviewing the slides I noticed this method is supposed to be "pre-OpenGL 3".
I think your slides refer to the old immediate mode. In immediate mode each vertex and its attributes are sent to OpenGL by calling functions that immediately draw them.
In your code, however, you're initializing a buffer with vertex data. This buffer may then be passed as a whole to OpenGL and drawn as a batch by a single OpenGL call. I wrote "may" because there isn't a single OpenGL call in your question.
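For illustration, a minimal JOGL-style sketch of that single-call path, assuming a GL3 context gl, the pointsBuf/texcoord buffers built by colorcube() above, and attribute locations 0 and 1 in your vertex shader (those locations, and the 3-component positions, are assumptions, not something from your code):
// Sketch only: shader setup, texture binding, and error handling omitted
int[] vao = new int[1];
int[] vbo = new int[2];
gl.glGenVertexArrays(1, vao, 0);
gl.glBindVertexArray(vao[0]);
gl.glGenBuffers(2, vbo, 0);
// Upload the positions once, at init time
gl.glBindBuffer(GL3.GL_ARRAY_BUFFER, vbo[0]);
gl.glBufferData(GL3.GL_ARRAY_BUFFER, pointsBuf.limit() * 4L, pointsBuf, GL3.GL_STATIC_DRAW);
gl.glEnableVertexAttribArray(0);
gl.glVertexAttribPointer(0, 3, GL3.GL_FLOAT, false, 0, 0);
// Upload the texture coordinates once
gl.glBindBuffer(GL3.GL_ARRAY_BUFFER, vbo[1]);
gl.glBufferData(GL3.GL_ARRAY_BUFFER, texcoord.limit() * 4L, texcoord, GL3.GL_STATIC_DRAW);
gl.glEnableVertexAttribArray(1);
gl.glVertexAttribPointer(1, 2, GL3.GL_FLOAT, false, 0, 0);
// Per frame: the whole cube (6 faces x 2 triangles x 3 vertices) in one call
gl.glDrawArrays(GL3.GL_TRIANGLES, 0, 36);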

Related

Raytracing from scratch

I made a 3D renderer that parses .obj files (ASCII) and projects them onto a 2D plane.
At first glance the projection seems fine, except for one thing:
[1]: https://i.stack.imgur.com/iaLOu.png
All polygons are being drawn, including the ones in the back of the model, which I should definitely not be able to see.
I did some quick research on Wikipedia and I think I found the relevant concept, called "Sichtbarkeitsproblem" (hidden-surface determination).
(DE): https://de.wikipedia.org/wiki/Sichtbarkeitsproblem
(EN):
https://en.wikipedia.org/wiki/Hidden-surface_determination
The article mentions that this is a common problem in computer graphics and that there are many different ways to perform a "Verdeckungsberechnung" (occlusion calculation).
It mentions approaches like using a z-buffer and ray tracing.
Now I don't really know a lot about ray tracing, but it seems quite applicable since I later want to add a light source.
I am not sure how ray tracing works, but if I just send out rays at angles matching the slope from the camera to every pixel on screen and check which polygon each ray hits first, I would end up with some polygons missing entirely just because one of their vertices is covered.
How do other ray tracers work? Do they remove the entire polygon when not getting a hit? Remove only one or more vertices (which I believe would cause massive distortion of the shape)? Or do they just render all the polygons sorted by their minimum distance to the camera (I guess this would make performance very bad)?
Please help me implement this in my code or give me a hint; it would mean a lot to me.
My code is as follows, and the link to the projection model (see image no. 1) is here:
https://drive.google.com/file/d/10dpjcL2d2QB15qqTSu5p6kQ534hNOzCz/view?usp=sharing
(Note that the 3D model and code must be in the same folder for it to work.)
// 12.11.2022
// See "Rotation matrix" on Wikipedia
// View space: the world-space vertex positions relative to the view of the camera
/* Hidden-surface determination is necessary to render a 3D scene correctly,
   because surfaces that are not visible to the viewer should not be drawn */
// -> https://de.wikipedia.org/wiki/Sichtbarkeitsproblem
// TODO: ray tracing / hidden-surface determination
// TODO: texture mapping
import java.util.Arrays;
import java.awt.Robot;
import java.nio.ByteBuffer;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.ArrayList;

byte b[];
int amount = 0;
String lines[];
PVector[][] vertices;
int[] faces;
float a = 0;
PVector cam, cam_angle, cam_move, cam_speed;
float angle = 0.0;

void setup() {
    size(800, 600);
    frameRate(60);
    noCursor();
    cam = new PVector(0, 100, -500);
    cam_angle = new PVector(0, 0, 0);
    cam_move = new PVector(0, 0, 0);
    cam_speed = new PVector(50, 50, 50);
    lines = loadStrings("UM2_SkullPile28mm.obj");
    println("File loaded. Now scanning contents...");
    println();
    Pattern numbers = Pattern.compile("(-?\\d+)");
    ArrayList<PVector> vertices_ = new ArrayList<PVector>();
    ArrayList<ArrayList> faces_ = new ArrayList<ArrayList>();
    int parsed_lines = 0;
    for (String i : lines) {
        switch(i.charAt(0)) {
            // Find faces
            case 'f':
                ArrayList<Integer> values = new ArrayList<Integer>();
                for (Matcher m = numbers.matcher(i); m.find(); values.add(Integer.parseInt(m.group())));
                faces_.add(values);
                break;
            // Find vertices
            case 'v':
                String s[] = i.trim().split("\\s+");
                vertices_.add(new PVector(Float.parseFloat(s[1])*20, Float.parseFloat(s[2])*20, Float.parseFloat(s[3])*20));
                break;
        };
        if (++parsed_lines % (lines.length/6) == 0 || parsed_lines == lines.length) println((int)(map(parsed_lines, 0, lines.length, 0, 100)), "%");
    }
    println();
    println("Done. Found", vertices_.size(), "vertices and", faces_.size(), "faces");
    int i = 0;
    vertices = new PVector[faces_.size()][];
    for (ArrayList<Integer> f_ : faces_) {
        vertices[i] = new PVector[f_.size()];
        int j = 0;
        for (int f : f_) {
            PVector v = vertices_.get(f-1);
            // The rotation helpers below expect radians, so convert the -90 degrees
            vertices[i][j] = Rotate3d_x(v, radians(-90));
            j++;
        }
        i++;
    }
}
PVector Rotate2d(PVector p, float a) {
    // a = angle in radians
    float[][] m2 = {
        {cos(a), -sin(a)},
        {sin(a), cos(a)}
    };
    float[][] rotated = matmul(m2, new float[][] {
        { p.x },
        { p.y }
    });
    return new PVector(rotated[0][0], rotated[1][0]);
}

PVector Rotate3d(PVector p, float[][] m2) {
    float[][] rotated = matmul(m2, new float[][] {
        { p.x },
        { p.y },
        { p.z }
    });
    return new PVector(rotated[0][0], rotated[1][0], rotated[2][0]);
}

PVector Rotate3d_x(PVector p, float a) {
    return Rotate3d(p,
        new float[][] {
            {1, 0, 0},
            {0, cos(a), -sin(a)},
            {0, sin(a), cos(a)}
        });
}

PVector Rotate3d_y(PVector p, float a) {
    return Rotate3d(p,
        new float[][] {
            {cos(a), 0, sin(a)},
            {0, 1, 0},
            {-sin(a), 0, cos(a)}
        });
}

PVector Rotate3d_z(PVector p, float a) {
    return Rotate3d(p,
        new float[][] {
            {cos(a), -sin(a), 0},
            {sin(a), cos(a), 0},
            {0, 0, 1}
        });
}

PVector Rotate3d(PVector p, PVector a) {
    return Rotate3d_z( Rotate3d_y(Rotate3d_x(p, a.x), a.y), a.z );
}
// Matrix multiplication
float[][] matmul(float[][] m1, float[][] m2) {
    int rows_m1 = m1.length, cols_m1 = m1[0].length;
    int rows_m2 = m2.length, cols_m2 = m2[0].length;
    // Fail fast on a dimension mismatch; catching the exception locally would
    // let execution continue with bad data
    if (cols_m1 != rows_m2) throw new RuntimeException("Columns of m1 must match rows of m2!");
    float[][] res = new float[rows_m1][cols_m2];
    for (int r = 0; r < rows_m1; r++) {
        for (int c = 0; c < cols_m2; c++) {
            // Multiply row r of m1 with column c of m2 and sum the products
            float sum = 0;
            for (int k = 0; k < cols_m1; k++) {
                sum += m1[r][k] * m2[k][c];
            }
            res[r][c] = sum;
        }
    }
    return res;
}
PVector applyPerspective(PVector p) {
    PVector d = applyViewTransform(p);
    return applyPerspectiveTransform(d);
}

PVector applyViewTransform(PVector p) {
    // c = camera position
    // co = camera orientation / camera rotation
    PVector c = cam;
    PVector co = cam_angle;
    // dx, dy, dz: https://en.wikipedia.org/wiki/3D_projection (mathematical formula)
    float[][] dxyz = matmul(
        matmul(new float[][] {
            {1, 0, 0},
            {0, cos(co.x), sin(co.x)},
            {0, -sin(co.x), cos(co.x)}
        }, new float[][] {
            {cos(co.y), 0, -sin(co.y)},
            {0, 1, 0},
            {sin(co.y), 0, cos(co.y)}
        }),
        matmul(new float[][] {
            {cos(co.z), sin(co.z), 0},
            {-sin(co.z), cos(co.z), 0},
            {0, 0, 1}
        }, new float[][] {
            {p.x - c.x},
            {p.y - c.y},
            {p.z - c.z},
        }));
    return new PVector(dxyz[0][0], dxyz[1][0], dxyz[2][0]);
}

PVector applyPerspectiveTransform(PVector d) {
    // e = display surface position relative to the camera pinhole c
    PVector e = new PVector(0, 0, 300);
    return new PVector((e.z / d.z) * d.x + e.x, (e.z / d.z) * d.y + e.y);
}
void draw() {
    background(255);
    translate(width/2, height/2);
    scale(1, -1);
    noStroke();
    fill(0, 100, 0, 50);
    PVector[][] points_view = new PVector[vertices.length][];
    for (int i = 0; i < vertices.length; i++) {
        points_view[i] = new PVector[vertices[i].length];
        for (int j = 0; j < vertices[i].length; j++)
            points_view[i][j] = applyViewTransform(Rotate3d_y(vertices[i][j], angle));
    }
    // The following snippet I got from: https://stackoverflow.com/questions/74443149/3d-projection-axis-inversion-problem-java-processing?noredirect=1#comment131433616_74443149
    float nearPlane = 1.0;
    for (int c = 0; c < points_view.length; c++) {
        beginShape();
        for (int r = 0; r < points_view[c].length-1; r++) {
            // Connect all points
            //if (i == a) continue;
            PVector p0 = points_view[c][r];
            PVector p1 = points_view[c][r+1];
            // Clip each edge against the near plane before projecting
            if (p0.z < nearPlane && p1.z < nearPlane) { continue; }
            if (p0.z >= nearPlane && p1.z < nearPlane)
                p1 = PVector.lerp(p0, p1, (p0.z - nearPlane) / (p0.z - p1.z));
            if (p0.z < nearPlane && p1.z >= nearPlane)
                p0 = PVector.lerp(p1, p0, (p1.z - nearPlane) / (p1.z - p0.z));
            // Project
            p0 = applyPerspectiveTransform(p0);
            p1 = applyPerspectiveTransform(p1);
            vertex(p0.x, p0.y);
            vertex(p1.x, p1.y);
        }
        endShape();
    }
}
Ray tracing doesn't determine whether or not a polygon is visible. It determines what point (if any) on what polygon is visible in a given direction.
As a simplification: rasterisation works by taking a set of geometry and, for each piece, determining which pixels it affects. Ray tracing works by taking a set of pixels and, for each one, determining which geometry is visible along that direction.
With rasterisation, there are many ways of making sure that polygons don't draw in the wrong order. One approach is to sort them by distance to the camera, but that doesn't work for polygons that overlap. The usual approach is to use a z-buffer: when a polygon is rasterised, calculate the distance to the camera at each pixel, and only update the buffer if the new value is nearer to the camera than the old value.
With ray tracing, each ray returns the nearest hit location along a direction, along with what it hit. Since each pixel will only be visited once, you don't need to worry about triangles drawing on top of each other.
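As a sketch of that per-ray logic (the Ray, Triangle, and Hit types and the intersect() helper are hypothetical placeholders, not anything from the code above):
// Hypothetical nearest-hit search for a single pixel's ray
Hit nearest = null;
for (Triangle t : triangles) {
    Hit h = intersect(ray, t);  // assumed to return null when the ray misses t
    if (h != null && (nearest == null || h.distance < nearest.distance)) {
        nearest = h;  // keep only the closest intersection found so far
    }
}
// 'nearest' now answers "what is visible along this ray"; no sorting needed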
If you just want to project a piece of 3D geometry onto a plane, rasterisation will likely be much, much faster. At a very high level, do this:
create an RGBA buffer of size X*Y
create a z buffer of size X*Y and fill it with 'inf'
for each triangle:
    project the triangle onto the projection plane
    for each pixel the triangle might affect:
        calculate the distance from the camera to the corresponding position on the triangle
        if the distance is lower than the current value in the z buffer:
            replace the values in the RGBA and z buffers with the new values
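In Processing/Java terms, the depth test at the heart of that loop might look like this (a minimal sketch; zBuffer is a hypothetical float array of width*height entries initialized to Float.POSITIVE_INFINITY, and x, y, distToCamera, and col come from rasterising one triangle):
// Hypothetical per-pixel depth test
int idx = y * width + x;
if (distToCamera < zBuffer[idx]) {
    zBuffer[idx] = distToCamera; // remember the nearest surface seen so far
    pixels[idx] = col;           // and write its colour (after loadPixels())
}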

Understanding stride and offset when interleaving attributes in a vertex buffer

I don't seem to be able to wrap my head around interleaving vertex attributes.
I'm trying to pass 3 attributes to my compute shader: position, velocity, and the number of times the particle has bounced off something. The code worked just fine before I added the nbr_bounces attribute. Now the data seems not to be aligned the way I'm imagining it.
Creates the interleaved list of floats:
class ParticleSystem {
    FloatList particleAttribList = new FloatList();
    float[] particlesBuffer;
    FloatBuffer fbParticles;
    int numOfParticles;
    ShaderProgram shaderProgram;
    ComputeProgram computeProgram;

    ParticleSystem(int count) {
        numOfParticles = count;
        for (int i = 0; i < count; i++) {
            Particle p = new Particle();
            p.pos.x = random(-1, 1);
            p.pos.y = random(-1, 1);
            p.vel.x = 0.01;
            p.vel.y = 0.01;
            particleAttribList.append(p.pos.x);
            particleAttribList.append(p.pos.y);
            particleAttribList.append(p.vel.x);
            particleAttribList.append(p.vel.y);
            particleAttribList.append(p.nbr_bounces);
        }
        particlesBuffer = new float[particleAttribList.size()];
        for (int i = 0; i < particlesBuffer.length; i++) {
            particlesBuffer[i] = particleAttribList.get(i);
        }
        fbParticles = Buffers.newDirectFloatBuffer(particlesBuffer);
        shaderProgram = new ShaderProgram(gl, "vert.glsl", "frag.glsl");
        computeProgram = new ComputeProgram(gl, "comp.glsl", fbParticles);
    }
Passes the list of floats to the Shader:
ComputeProgram(GL4 gl, String compute, FloatBuffer verticesFB) {
    this.gl = gl;
    // Load and compile the compute shader
    int compute_shader = shaderHelper.createAndCompileShader(gl, GL4.GL_COMPUTE_SHADER, compute);
    compute_program = gl.glCreateProgram();
    gl.glAttachShader(compute_program, compute_shader);
    gl.glLinkProgram(compute_program);
    gl.glGenBuffers(1, vbo, 0);
    gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, vbo[0]);
    gl.glBufferData(GL4.GL_ARRAY_BUFFER, verticesFB.limit()*4, verticesFB, GL4.GL_DYNAMIC_DRAW);
    gl.glEnableVertexAttribArray(0);
    gl.glEnableVertexAttribArray(1);
    gl.glEnableVertexAttribArray(2);
    // Since the Particle struct has 2 vec2 variables + 1 float, the stride is 16 + 4 = 20
    // position attribute (no offset)
    gl.glVertexAttribPointer(0, 2, GL4.GL_FLOAT, false, 20, 0);
    // velocity attribute (offset 2*4 = 8)
    gl.glVertexAttribPointer(1, 2, GL4.GL_FLOAT, false, 20, 8);
    // nbr_bounces (offset 2*4 + 2*4 = 16)
    gl.glVertexAttribPointer(2, 1, GL4.GL_FLOAT, false, 20, 16);
    ssbo = vbo[0];
    gl.glBindBufferBase(GL4.GL_SHADER_STORAGE_BUFFER, 0, ssbo);
}
Compute Shader:
#version 430

struct Particle {
    vec2 pos;
    vec2 vel;
    float nbr_bounces;
};

layout(std430, binding = 0) buffer particlesBuffer
{
    Particle particles[];
};

layout(local_size_x = 1024, local_size_y = 1, local_size_z = 1) in;

void main()
{
    uint i = gl_GlobalInvocationID.x;
    vec2 tmpPos = particles[i].pos + particles[i].vel;
    if (abs(tmpPos.x) >= 1.0) {
        particles[i].vel.x = -1.0 * particles[i].vel.x;
        particles[i].nbr_bounces += 1.0;
    }
    if (abs(tmpPos.y) >= 1.0) {
        particles[i].vel.y = -1.0 * particles[i].vel.y;
        particles[i].nbr_bounces += 1.0;
    }
    particles[i].pos += particles[i].vel;
}
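One detail worth checking against the std430 layout rules: a struct in std430 is padded out to a multiple of its largest member's alignment, which for the vec2 members here is 8 bytes, so each Particle occupies 24 bytes on the GPU side rather than the 20 packed on the CPU side. A hedged sketch of a matching layout (the padding float and the stride of 24 are the point being illustrated, not code from the question):
// Append one padding float per particle so the CPU layout matches the
// 24-byte std430 stride (5 floats of data + 1 float of padding)
particleAttribList.append(p.pos.x);
particleAttribList.append(p.pos.y);
particleAttribList.append(p.vel.x);
particleAttribList.append(p.vel.y);
particleAttribList.append(p.nbr_bounces);
particleAttribList.append(0.0f); // padding up to the struct's 8-byte alignment
// ...and the attribute pointers would then use stride 24 instead of 20:
// gl.glVertexAttribPointer(0, 2, GL4.GL_FLOAT, false, 24, 0);
// gl.glVertexAttribPointer(1, 2, GL4.GL_FLOAT, false, 24, 8);
// gl.glVertexAttribPointer(2, 1, GL4.GL_FLOAT, false, 24, 16);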

In Android, how do I count the number of lines in an image using OpenCV?

This is what I tried. I got the output as a grayscale image in img2, an ImageView object.
The problem is that lines.cols() counts everything as a line.
I want the count to be exactly the number of larger lines, as shown in the first picture (I mean the lines that separate the parking lot into spaces a car can occupy; see my output image). Can anyone guide me on how to get the exact count of parking lines? I am using OpenCV version 2.4 and have been working on this for the past 2 days.
public String getCount() {
    Bitmap bitmap = BitmapFactory.decodeResource(getApplicationContext().getResources(), R.drawable.park);
    mat = new Mat();
    edges = new Mat();
    Mat mRgba = new Mat(612, 816, CvType.CV_8UC1);
    Mat lines = new Mat();
    Utils.bitmapToMat(bitmap, mat);
    Imgproc.Canny(mat, edges, 50, 90);
    int threshold = 50;
    int minLineSize = 20;
    int lineGap = 20;
    Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, threshold, minLineSize, lineGap);
    int count = lines.cols();
    int coun = lines.rows();
    System.out.println("count = " + count);
    System.out.println("coun = " + coun);
    String cou = String.valueOf(count);
    for (int x = 0; x < lines.cols(); x++) {
        double[] vec = lines.get(0, x);
        double x1 = vec[0],
               y1 = vec[1],
               x2 = vec[2],
               y2 = vec[3];
        Point start = new Point(x1, y1);
        Point end = new Point(x2, y2);
        Core.line(mRgba, start, end, new Scalar(255, 0, 0), 3);
    }
    Bitmap bmp = Bitmap.createBitmap(mRgba.cols(), mRgba.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(mRgba, bmp);
    bitmap = bmp;
    Drawable d = new BitmapDrawable(Resources.getSystem(), bitmap);
    img2.setImageDrawable(d);
    return cou;
}
You could adapt one of the many existing OpenCV counting answers, but the kind of detection varies for every distinct case, so you should build your own model for parking lines. Check some of these approaches/detectors: Haar cascade classifier, latent SVM, or Bag of Words.
You could also adapt an answer that works for something else, like the one below for coins; you would just search for the shape of parking lines instead of coins:
http://answers.opencv.org/question/36111/how-to-count-number-of-coins-in-android-opencv/
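As a simpler first step before training a detector, you could also filter the HoughLinesP output by segment length so short edge fragments are not counted; a rough sketch against the lines Mat from the question (the 100-pixel minimum is an arbitrary value to tune, not something from the original code):
// Count only segments longer than a tuned minimum length
double minLength = 100; // pixels; tune for your image
int longLines = 0;
for (int x = 0; x < lines.cols(); x++) {
    double[] vec = lines.get(0, x);
    double dx = vec[2] - vec[0];
    double dy = vec[3] - vec[1];
    if (Math.sqrt(dx * dx + dy * dy) >= minLength) {
        longLines++;
    }
}
System.out.println("long lines = " + longLines);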

LibGDX: boundsInFrustum and BoundingBox not working as expected

I am loading a Box2D scene from a JSON file. This scene contains a fixture marking the bounding box that the camera is allowed to travel in. This mechanism works fine for the lower and left bounds, yet fails completely for the upper and right bounds, which is rather odd.
Here is the part that loads the bounding box from the file:
PolygonShape shape = ((PolygonShape) fixture.getShape());
Vector2 vertex = new Vector2();
// The location of the camera as the initial value
float boundLeft = world.startX, boundRight = world.startX, boundUp = world.startY, boundLow = world.startY;
// Iterate over each vertex in the fixture and set the boundary values
for (int i = 0; i < shape.getVertexCount(); i++) {
    shape.getVertex(i, vertex);
    vertex.add(body.getPosition());
    boundLeft = Math.min(vertex.x, boundLeft);
    boundLow = Math.min(vertex.y, boundLow);
    boundRight = Math.max(vertex.x, boundRight);
    boundUp = Math.max(vertex.y, boundUp);
}
// Build the bounding boxes with enough thickness to prevent tunneling on fast pans
world.boundLeft = new BoundingBox(new Vector3(boundLeft - 5, boundLow - 5, 0).scl(RenderingSystem.PPM), new Vector3(boundLeft, boundUp + 5, 0).scl(RenderingSystem.PPM));
world.boundRight = new BoundingBox(new Vector3(boundRight, boundLow - 5, 0).scl(RenderingSystem.PPM), new Vector3(boundRight + 5, boundUp + 5, 0).scl(RenderingSystem.PPM));
world.boundUp = new BoundingBox(new Vector3(boundLeft - 5, boundUp, 0).scl(RenderingSystem.PPM), new Vector3(boundRight + 5, boundUp + 5, 0).scl(RenderingSystem.PPM));
world.boundLow = new BoundingBox(new Vector3(boundLeft - 5, boundLow - 5, 0).scl(RenderingSystem.PPM), new Vector3(boundRight + 5, boundLow, 0).scl(RenderingSystem.PPM));
// world is a class containing some properties, including these BoundingBoxes
// RenderingSystem.PPM is the number of pixels per metre, in this case 64
And the following part is called when the camera is panned around:
public void pan(float x, float y) {
    Vector3 current = new Vector3(camera.position);
    camera.translate(-x, y);
    camera.update(true);
    if (camera.frustum.boundsInFrustum(world.boundLeft) || camera.frustum.boundsInFrustum(world.boundRight)) {
        camera.position.x = current.x; // Broke bounds on x axis, set camera back to old x
        camera.update();
    }
    if (camera.frustum.boundsInFrustum(world.boundLow) || camera.frustum.boundsInFrustum(world.boundUp)) {
        camera.position.y = current.y; // Broke bounds on y axis, set camera back to old y
        camera.update();
    }
    game.batch.setProjectionMatrix(camera.combined);
}
Well, I figured it out. Guess what my world.startX and world.startY were defined as? That's right, they were in screen coordinates:
world.startX = start.getPosition().x * RenderingSystem.PPM;
world.startY = start.getPosition().y * RenderingSystem.PPM;
This was causing Math.max to always pick world.startX and world.startY, as these values were absolutely massive in comparison to the vertex coordinates.
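A minimal sketch of one possible fix, assuming everything else in the vertex loop is in world units (metres): convert the start position back to metres before using it as the seed value.
// Seed the bounds in world units rather than screen pixels,
// so Math.min/Math.max compare like with like
float startXWorld = world.startX / RenderingSystem.PPM;
float startYWorld = world.startY / RenderingSystem.PPM;
float boundLeft = startXWorld, boundRight = startXWorld;
float boundUp = startYWorld, boundLow = startYWorld;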

How can I iterate over android.graphics.Path segments?

I have an android Path object (created from text: paint.getTextPath(someString, 0, someString.length(), 0f, 0f, myPathObject);)
How can I iterate over the path's segments ("move to", "line to", "quad to", etc.) like with PathIterator in AWT?
Old question, but I wanted it to have an answer.
Look at android.graphics.PathMeasure (available since API level 1):
float[] tmpPos = new float[2];
float[] tmpTan = new float[2];
PathMeasure measure = new PathMeasure();
measure.setPath(path, true);
do {
    float dist = measure.getLength();
    // Sample the current contour at 1-pixel intervals
    for (float p = 0; p < dist; p += 1) {
        measure.getPosTan(p, tmpPos, tmpTan);
        float x = tmpPos[0], y = tmpPos[1];   // position on the path
        float nx = tmpTan[0], ny = tmpTan[1]; // unit tangent at that position
        // do your own stuff
    }
} while (measure.nextContour());
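Putting it together with the question's text path (a short usage sketch; paint and someString are the question's own variables):
Path path = new Path();
paint.getTextPath(someString, 0, someString.length(), 0f, 0f, path);
// ...then walk it with the PathMeasure loop above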
