libgdx: check whether a model is clicked - java

I am using libgdx for a simple 3D game, and I need to check whether a model has been clicked.
Here is my code:
public int getObject (int screenX, int screenY) {
    Ray ray = cam.getPickRay(screenX, screenY);
    int result = -1;
    float distance = -1;
    for (int i = 0; i < rooms.size; ++i) {
        final GameObject instance = rooms.get(i);
        // world position of the object's center
        instance.transform.getTranslation(position);
        position.add(instance.center);
        // distance along the ray to the point closest to the center
        final float len = ray.direction.dot(position.x - ray.origin.x, position.y - ray.origin.y, position.z - ray.origin.z);
        if (len < 0f)
            continue;
        // squared distance from the center to that closest point
        float dist2 = position.dst2(ray.origin.x + ray.direction.x * len, ray.origin.y + ray.direction.y * len, ray.origin.z + ray.direction.z * len);
        if (distance >= 0f && dist2 > distance)
            continue;
        // hit if within the bounding sphere; keep the nearest hit
        if (dist2 <= instance.radius * instance.radius) {
            result = i;
            distance = dist2;
        }
    }
    return result;
}
It only works sometimes.
Here is my model:
http://www6.zippyshare.com/v/97501566/file.html
What am I doing wrong? Any help would be appreciated; I am new to libgdx.
UPDATE: When I press model 1 it lights up, but when I press model 2, model 1 lights up too (instead of 2)...

I didn't fully analyze your code, nor do I know what "it only works sometimes" actually means (e.g. under which circumstances doesn't it work?), but you're using a bounding sphere for detecting whether the object has been clicked or not.
Assuming your calculations are correct (as I said, I didn't check them in depth), you can still get false positives or negatives, since the only shape which is perfectly represented by a bounding sphere is... well... a sphere.
That might be the reason click detection only works "sometimes".
If that is the case and you want more accurate detection, you should either use different bounding volumes, bounding volume hierarchies, or a rendering-based approach (i.e. render the object id into some buffer, which would allow for pixel-perfect selection).
UPDATE:
From your post update it seems that bounding spheres are not the problem here, since they should not overlap - unless your data is wrong, which you should check/debug.
So the problem might actually lie in your calculations. From the documentation it looks like the ray you get is projected into the scene (i.e. into world space), so you'd need to transform your objects' centers into world space as well.
You're currently only applying the position but ignoring rotation and scale, so the resulting position might be wrong. I'm sure there's some built-in transform code, so instead of transforming manually you should use that. Please check the docs on how to do that.
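For example, libgdx's Vector3 can be multiplied by a Matrix4 directly, so instead of adding the center to the translation you could transform the local center by the full world transform. A minimal sketch, assuming position and instance.center are Vector3s as in the question:
// transform the local center by the full transform (translation, rotation and scale)
position.set(instance.center).mul(instance.transform);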


Issues with Raytracing triangles (orientation and coloring)

EDIT: I found out that all the pixels were upside down because of the difference between screen and world coordinates, so that is no longer a problem.
EDIT: After following a suggestion from @TheVee (using absolute values), my image got much better, but I'm still seeing issues with color.
I'm having a little trouble with ray-tracing triangles. This is a follow-up to my previous question about the same topic. The answers to that question made me realize that I needed to take a different approach, and the new approach worked much better, but I'm seeing a couple of issues with my raytracer now:
There is one triangle that never renders in color (it is always black, even though its color is supposed to be yellow).
Here is what I am expecting to see:
But here is what I am actually seeing:
To start debugging the first problem: even if I remove all other objects (including the blue triangle), the yellow triangle is always rendered black, so I don't believe it is an issue with the shadow rays I am sending out. I suspect it has to do with the angle of the triangle/plane relative to the camera.
Here is my process for ray-tracing triangles, which is based on the process described on this website:
1) Determine whether the ray intersects the plane.
2) If it does, determine whether the intersection point lies inside the triangle (using parametric coordinates).
Here is the code for determining if the ray hits the plane:
private Vector getPlaneIntersectionVector(Ray ray)
{
    double epsilon = 0.00000001;
    Vector w0 = ray.getOrigin().subtract(getB());
    double numerator = -(getPlaneNormal().dotProduct(w0));
    double denominator = getPlaneNormal().dotProduct(ray.getDirection());
    //ray is parallel to triangle plane
    if (Math.abs(denominator) < epsilon)
    {
        //ray lies in triangle plane
        if (numerator == 0)
        {
            return null;
        }
        //ray is disjoint from plane
        else
        {
            return null;
        }
    }
    double intersectionDistance = numerator / denominator;
    //intersectionDistance < 0 means the "intersection" is behind the ray (pointing away from plane), so not a real intersection
    return (intersectionDistance >= 0) ? ray.getLocationWithMagnitude(intersectionDistance) : null;
}
And once I have determined that the ray intersects the plane, here is the code to determine if the ray is inside the triangle:
private boolean isIntersectionVectorInsideTriangle(Vector planeIntersectionVector)
{
    //Get edges of triangle
    Vector u = getU();
    Vector v = getV();
    //Pre-compute the five unique dot products
    double uu = u.dotProduct(u);
    double uv = u.dotProduct(v);
    double vv = v.dotProduct(v);
    Vector w = planeIntersectionVector.subtract(getB());
    double wu = w.dotProduct(u);
    double wv = w.dotProduct(v);
    double denominator = (uv * uv) - (uu * vv);
    //Get and test parametric coordinates
    double s = ((uv * wv) - (vv * wu)) / denominator;
    if (s < 0 || s > 1)
    {
        return false;
    }
    double t = ((uv * wu) - (uu * wv)) / denominator;
    if (t < 0 || (s + t) > 1)
    {
        return false;
    }
    return true;
}
I think I am having some issue with my coloring, and I think it has to do with the normals of the various triangles. Here is the equation I am using when building my lighting model for spheres and triangles:
c = cr*ca + cr*cl*max(0, n·l) + cl*cp*max(0, e·r)^p
Now, here is the code that implements it:
public Color calculateIlluminationModel(Vector normal, boolean isInShadow, Scene scene, Ray ray, Vector intersectionPoint)
{
    //c = cr*ca + cr*cl*max(0, n dot l) + cl*cp*max(0, e dot r)^p
    Vector lightSourceColor = getColorVector(scene.getLightColor()); //cl
    Vector diffuseReflectanceColor = getColorVector(getMaterialColor()); //cr
    Vector ambientColor = getColorVector(scene.getAmbientLightColor()); //ca
    Vector specularHighlightColor = getColorVector(getSpecularHighlight()); //cp
    Vector directionToLight = scene.getDirectionToLight().normalize(); //l
    double angleBetweenLightAndNormal = directionToLight.dotProduct(normal);
    Vector reflectionVector = normal.multiply(2).multiply(angleBetweenLightAndNormal).subtract(directionToLight).normalize(); //r
    double visibilityTerm = isInShadow ? 0 : 1;
    Vector ambientTerm = diffuseReflectanceColor.multiply(ambientColor);
    double lambertianComponent = Math.max(0, angleBetweenLightAndNormal);
    Vector diffuseTerm = diffuseReflectanceColor.multiply(lightSourceColor).multiply(lambertianComponent).multiply(visibilityTerm);
    double angleBetweenEyeAndReflection = scene.getLookFrom().dotProduct(reflectionVector);
    angleBetweenEyeAndReflection = Math.max(0, angleBetweenEyeAndReflection);
    double phongComponent = Math.pow(angleBetweenEyeAndReflection, getPhongConstant());
    Vector phongTerm = lightSourceColor.multiply(specularHighlightColor).multiply(phongComponent).multiply(visibilityTerm);
    return getVectorColor(ambientTerm.add(diffuseTerm).add(phongTerm));
}
I am seeing that the dot product between the normal and the light source is -1 for the yellow triangle, and about -0.707 for the blue triangle, so I'm not sure whether the normal pointing the wrong way is the problem. Regardless, when I made sure the angle between the light and the normal was positive (Math.abs(directionToLight.dotProduct(normal))), it caused the opposite problem:
I suspect that it will be a small typo/bug, but I need another pair of eyes to spot what I couldn't.
Note: My triangles have vertices (a,b,c), and the edges (u,v) are computed using a-b and c-b respectively (these are also used for calculating the plane/triangle normal). A Vector is made up of an (x,y,z) point, and a Ray is made up of an origin Vector and a normalized direction Vector.
Here is how I am calculating normals for all triangles:
private Vector getPlaneNormal()
{
    Vector v1 = getU();
    Vector v2 = getV();
    return v1.crossProduct(v2).normalize();
}
Please let me know if I left out anything that you think is important for solving these issues.
EDIT: After help from @TheVee, this is what I have at the end:
There are still problems with z-buffering and with Phong highlights on the triangles, but the problem I was trying to solve here is fixed.
It is a common problem in ray tracing of scenes including planar objects that we hit them from the wrong side. The formulas containing the dot product are presented with an inherent assumption that light is incident on the object from the direction the outward-facing normal points to. This can be true for only half the possible orientations of your triangle, and you've been in bad luck to orient it with its normal facing away from the light.
Technically speaking, in the physical world your triangle would not have zero volume. It's composed of some layer of material which is just thin. On either side it has a proper normal that points outward. Assigning a single normal is a simplification that's fair to take, because the two only differ in sign.
However, if we make a simplification, we need to account for it. Having what technically is an inward-facing normal in our formulas gives negative dot products, a case they are not made for. It's as if light were coming from the inside of the object, or hitting a surface that could not possibly be in its way. That's why they give an erroneous result. The negative value will subtract light from other sources, and depending on the magnitude and implementation may result in darkening, full black, or numerical underflow.
But because we know the correct normal is either the one we're using or its negative, we can fix both cases at once by taking a preventive absolute value wherever a positive dot product is implicitly assumed (in your code, that's angleBetweenLightAndNormal). Some libraries like OpenGL do that for you, and on top use the additional information (the sign) to choose between two different materials (front and back) that you may provide if desired. Alternatively, they can be set to not draw back faces at all, because in solid objects they will be overdrawn by front faces anyway (known as face culling), saving about half of the numerical work.
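A minimal sketch of both options in the questioner's own code, using only the Vector methods already shown above (dotProduct, multiply); the normal-flipping variant is an alternative I'm naming here, not something from the original post:
// Option 1: preventive absolute value for the Lambertian term
double angleBetweenLightAndNormal = Math.abs(directionToLight.dotProduct(normal));
// Option 2: flip the normal so it faces against the incoming ray
// before any dot products are taken (keeps the reflection vector consistent)
if (normal.dotProduct(ray.getDirection()) > 0) {
    normal = normal.multiply(-1);
}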

Animating translation between two fixed points (Libgdx)

I'm making a 2D game in libgdx and I would like to know what the standard way of moving (translating between two known points) on the screen is.
On a button press, I am trying to animate a diagonal movement of a sprite between two points. I know the x and y coordinates of the start and finish points, but I can't figure out the maths that determines where the texture should be in between, on each call to render. At the moment my algorithm is sort of like:
textureProperty = new TextureProperty();
firstPtX = textureProperty.currentLocationX
firstPtY = textureProperty.currentLocationY
nextPtX = textureProperty.getNextLocationX()
nextPtY = textureProperty.getNextLocationY()
diffX = nextPtX - firstPtX
diffY = nextPtY - firstPtY
deltaX = diffX/speedFactor // Arbitrary, controls speed of the translation
deltaY = diffY/speedFactor
renderLocX = textureProperty.renderLocX()
renderLocY = textureProperty.renderLocY()
if(textureProperty.getFirstPoint() != textureProperty.getNextPoint()){
    animating = true
}
if (animating) {
    newLocationX = renderLocX + deltaX
    newLocationY = renderLocY + deltaY
    textureProperty.setRenderPoint(renderLocX, renderLocY)
}
if (textureProperty.getRenderPoint() == textureProperty.getNextPoint()){
    animating = false
    textureProperty.setFirstPoint(textureProperty.getNextPoint())
}
batch.draw(texture, textureProperty.renderLocX(), textureProperty.renderLocY())
However, I can foresee a few issues with this code.
1) Since pixel coordinates are integers, if I divide by something that doesn't go in evenly, the result will be rounded.
2) As a result of 1), the sprite will miss the target.
Also, when I test the animation, the object moving from point 1 misses by a long shot, which suggests something is wrong with my maths.
Here is what I mean graphically:
Desired outcome:
Actual outcome:
Surely this is a standard problem. I welcome any suggestions.
Let's say you have start coordinates X1,Y1 and end coordinates X2,Y2. And let's say you have some variable p which holds the percentage of the path passed so far. So if p == 0 you are at X1,Y1, if p == 100 you are at X2,Y2, and if 0 < p < 100 you are somewhere in between. In that case you can calculate the current coordinates depending on p like:
X = X1 + ((X2 - X1)*p)/100;
Y = Y1 + ((Y2 - Y1)*p)/100;
So you are not basing the current coordinates on the previous ones; you always calculate them from the start point, the end point, and the percentage of the path passed.
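For example, to make the move take a fixed time, you could advance p by the frame delta each render call. A sketch, assuming a duration variable in seconds and float coordinates:
// advance the percentage so the full move takes `duration` seconds
p += 100 * delta / duration;
if (p >= 100) p = 100; // arrived: clamp and stop animating
float x = x1 + ((x2 - x1) * p) / 100f;
float y = y1 + ((y2 - y1) * p) / 100f;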
First of all you need a Vector2 direction, giving the direction between the two points.
This Vector should be normalized, so that its length is 1:
Vector2 dir = new Vector2(x2-x1,y2-y1).nor();
Then in the render method you need to move the object, which means you need to change its position. You have the speed (given in distance/second), a normalized Vector giving the direction, and the time since the last update.
So the new position can be calculated like this:
position.x += speed * delta * dir.x;
position.y += speed * delta * dir.y;
Now you only need to limit the position to the target position, so that you don't go too far:
boolean stop = false;
// note: these checks assume movement in the positive x/y direction
if (position.x >= target.x) {
    position.x = target.x;
    stop = true;
}
if (position.y >= target.y) {
    position.y = target.y;
    stop = true;
}
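A direction-independent alternative (a sketch, assuming position, target and the start point are Vector2s) is to compare the distance travelled against the total distance:
// stop once we have covered at least the full start-to-target distance
if (position.dst(start) >= target.dst(start)) {
    position.set(target);
    stop = true;
}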
Now to the pixel problem:
Do not use pixels! Using pixels will make your game resolution-dependent.
Use libgdx's Viewport and Camera instead.
This allows you to calculate everything in your own world units (for example meters), and libgdx will convert it for you.
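A minimal sketch of such a setup using libgdx's OrthographicCamera and FitViewport; the 16x9 world size in world units is an arbitrary example:
// create(): a camera plus a viewport mapping 16x9 world units to the screen
OrthographicCamera camera = new OrthographicCamera();
FitViewport viewport = new FitViewport(16, 9, camera);
// resize(int width, int height): let the viewport adapt to the window
viewport.update(width, height, true);
// render(): draw using world coordinates
batch.setProjectionMatrix(camera.combined);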
I didn't see any big errors, though I noticed a couple of things. You are comparing two objects using == and !=; I suggest using a.equals(b) and !a.equals(b) instead. Secondly, your render coordinates are always set back to the same values in textureProperty.setRenderPoint(renderLocX, renderLocY) - you are assigning the old values back. You probably meant to use the newLocation coordinates.
By the way, thanks for your code - I was searching for exactly something like this. <3

Use normal coordinates for collision response? (java)

I've finally completed the collision detection, but the collision response is very glitchy (if you try, you can go through walls), mainly because I have no information about the collision angle!
I have the camPos(x,y,z) coordinates of my camera, as well as the min and max values of the model (minX,minY,minZ,maxX,maxY,maxZ). With a simple test I check whether camPos is intersecting the model's boundaries:
if (cameraX > xMax || cameraX < xMin) {
    collisionX = false;
    collisionY = false;
    collisionZ = false;
} else {
    collisionX = true;
    System.out.println("collisionX: " + collisionX); // collision on x is true!
}
I have access to all vertex positions and calculated the min/max values of the object to create a bounding box.
In order to get the right direction in which I want to push the object, I need to know in which direction the nearest face is pointing (left, right, forward, back?).
To find out the angle I thought I could use the normal coordinates, which I also have access to, since they indicate a 90 degree angle to the face, right?
Console print: // all 'vn' values of cube.obj
xNormals:0.0
yNormals:0.0
zNormals:0.0
xNormals:0.0
yNormals:0.0
zNormals:0.0
xNormals:1.0
yNormals:1.0
zNormals:1.0
xNormals:0.0
yNormals:0.0
zNormals:0.0
xNormals:-1.0
yNormals:-1.0
zNormals:-1.0
xNormals:0.0
yNormals:0.0
zNormals:0.0
Basically, I want to know how these normal coordinates have to be applied to the min and max values of the object so that I can define all faces of the bounding box. For example: face A is defined by xMin_a and xMax_a and faces left, face B is defined by xMin_b and xMax_b and faces right, and so on.
I hope it's a bit more understandable now; it's quite hard to explain.
Okay, I've figured out a way to do it; I hope it will help someone else too. I found out how to calculate the faces of a bounding box. It was actually not even that hard, and it had absolutely nothing to do with normals - sorry for the confusion.
As shown in my question, you can calculate the bounding box of an object from its min/max values, but you can also define all six faces of the box by extending the check and using a boolean to find out whether the respective face collides somewhere:
if (collisionX == true && collisionY == true) {
    if (cameraZ+0.2 > zMin_list.get(posIndex)-0.2 && cameraZ+0.2 < zMin_list.get(posIndex)) {
        faceA = true;
        System.out.println("faceA = " + faceA + " face a is true!");
    } else {
        faceA = false;
    }
}
You can do this for all six faces, where A (forward), B (back), C (left), D (right), E (up) and F (down) define the collision direction:
If the collision is on face A, the response is to move backwards (minus on the z axis). That way you will never be able to glitch through a wall, because it will always push you away from the collision area.
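A sketch of that response rule, assuming the same face booleans and a small hypothetical pushBack step; the axis signs depend on your world setup:
// push the camera away from whichever face was hit
if (faceA) cameraZ -= pushBack;      // hit the forward face: move back
else if (faceB) cameraZ += pushBack; // hit the back face: move forward
else if (faceC) cameraX += pushBack; // hit the left face: move right
else if (faceD) cameraX -= pushBack; // hit the right face: move left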

Tetris: Turning the pieces?

So I am making a Tetris game, and one of the problems I am running into is piece rotation. I know I could just hard-code it, but that's not the right way to do it. The way my system works is that I have a 2D array of a 'Tile' object; the 'Tile' object has x, y coords, a boolean isActive, and a color. The boolean isActive basically tells the computer which tiles are actually being used (since Tetris shapes are not perfect quadrilaterals).
Here is how I would make a shape in my system:
public static Tile[][] shapeJ() {
    Tile[][] tile = new Tile[3][2];
    for (int x = 0; x < 3; x++) {
        for (int y = 0; y < 2; y++) {
            tile[x][y] = new Tile(false, Color.blue);
        }
    }
    tile[0][0].setActive(true);
    tile[0][0].setX(0);
    tile[0][0].setY(0);
    tile[1][0].setActive(true);
    tile[1][0].setX(50);
    tile[1][0].setY(0);
    tile[2][0].setActive(true);
    tile[2][0].setX(100);
    tile[2][0].setY(0);
    tile[2][1].setActive(true);
    tile[2][1].setX(100);
    tile[2][1].setY(50);
    return tile;
}
Now I need to rotate this object, and I do not know how to do that without hard-coding the positions. There has to be an algorithm for it. Can anyone offer some help?
A good approach that I used when writing a Tetris clone is rotation matrices:
http://en.wikipedia.org/wiki/Rotation_matrix
So the coordinates (x',y') of the point (x,y) after rotation are:
x' = x*cos(theta) - y*sin(theta);
y' = x*sin(theta) + y*cos(theta);
where theta is the angle of rotation (±90 degrees, or ±PI/2 radians for the Java math functions I know of).
In this case the blocks are rotated around the origin (0, 0), so you either have to keep the coordinates of the block in a special "block space" that then gets transposed onto "field space", or you subtract the block's offset so that it is centered at the origin on every iteration.
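As a sketch under those assumptions: for theta = -90 degrees (a clockwise quarter turn in standard math coordinates) cos = 0 and sin = -1, so the matrix collapses to x' = y, y' = -x. Applied to the questioner's Tile API (assuming getX/getY getters matching the setters shown above, and an assumed pivot point):
// rotate one tile 90 degrees clockwise about (pivotX, pivotY)
static void rotateTileClockwise(Tile t, int pivotX, int pivotY) {
    int x = t.getX() - pivotX; // translate so the pivot is the origin
    int y = t.getY() - pivotY;
    t.setX(pivotX + y);        // x' = y
    t.setY(pivotY - x);        // y' = -x, then translate back
}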
I hope that helps, I am happy to answer specific questions in the comments.

java movement in a 2d array

In the game I'm building, I have made a basic collision detection system.
My current method is explained below.
I work out where the player will be in the next step of the game:
double checkforx = x+vx;
double checkfory = y+vy;
I then check for a collision with blocks (1) in levelArray.
public static boolean checkForMapCollisions(double character_x, double character_y) {
    //First find our position in the map so we can check for things...
    int map_x = (int) Math.round((character_x-10)/20);
    int map_y = (int) Math.round((character_y-10)/20);
    //Now find out where our bottom corner is on the map
    int map_ex = (int) Math.round((character_x+10)/20);
    int map_ey = (int) Math.round((character_y+10)/20);
    //Now check if there's anything in the way of our character being there...
    try {
        for (int y = map_y; y <= map_ey; y++) {
            for (int x = map_x; x <= map_ex; x++) {
                if (levelArray[y][x] == 1) {
                    return true;
                }
            }
        }
    } catch (Exception e) {
        System.out.println("Player outside the map");
    }
    return false;
}
If true is returned: {do nothing}.
If false is returned: {apply player physics}.
I need the player to be able to land on a block and then be able to walk around, but I cannot find an adequate tutorial for this.
Can someone give me an idea of how to run my collision detection and/or movement?
There are two parts to this question. The first is collision detection, meaning determining whether a volume is touching or intersecting another volume. The second is collision response, which is the physics portion.
I'll cover collision detection here, as that's primarily what you asked about.
Define a class for the map like so:
int emptyTile = 0;

//this assumes levelArray is not a ragged array.
public boolean inBounds(int x, int y) {
    return x > -1 && y > -1 && x < levelArray[0].length && y < levelArray.length;
}

public boolean checkForCollisions(Rectangle rectangle) {
    boolean wasCollision = false;
    for (int x = 0; x < rectangle.width && !wasCollision; x++) {
        int x2 = x + rectangle.x;
        for (int y = 0; y < rectangle.height && !wasCollision; y++) {
            int y2 = y + rectangle.y;
            if (inBounds(x2, y2) && levelArray[y2][x2] != emptyTile) {
                //collision, notify listeners.
                wasCollision = true;
            }
        }
    }
    return wasCollision;
}
Do not make your methods static. You probably want more than one instance of a level, right? Static is for state shared across all instances of a class, and level data will surely not remain constant for every level.
Instead of passing in a coordinate, try passing in an entire rectangle. This rectangle will be the bounding box of your character (the bounding box is also sometimes referred to as an AABB, which means axis-aligned bounding box - just FYI in case you're reading tutorials online about this sort of thing). Let your Sprite class decide what its bounding rectangle is; that's not the map class's responsibility. All the map should be used for is maybe rendering, and deciding whether a rectangle overlaps tiles which are not empty.
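Hypothetical usage, assuming a java.awt.Rectangle-style bounding box expressed in tile coordinates (x, y, width, height), since the loop above indexes levelArray directly:
// ask the map whether the player's next bounding box hits a solid tile
Rectangle playerBounds = new Rectangle(tileX, tileY, widthInTiles, heightInTiles);
if (map.checkForCollisions(playerBounds)) {
    // cancel the move / run the collision response
}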
I am sorry for a poor explanation, but here is my GitHub code, which should help more:
https://github.com/Quillion/Engine
Just to explain what I do: I have a character object (https://github.com/Quillion/Engine/blob/master/QMControls.java) which has vectors and a boolean called standing. Every frame, standing is set to false. Then we pass the object to the engine to check for collisions; if a collision happens, standing is set to true and the y vector is set to 0. As for the x vector: whenever you press an arrow key, you set the object's x vector to whatever value you want, and in the update loop you displace the given box by that amount of speed.
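A sketch of the loop described above, with hypothetical names (player, engine, gravity, delta), since the full details live in the linked repository:
// each frame: assume airborne until the engine proves otherwise
player.standing = false;
engine.checkCollisions(player);   // on a hit: standing = true, vy = 0
if (!player.standing) {
    player.vy += gravity * delta; // fall while unsupported
}
player.x += player.vx * delta;    // vx is set by the arrow keys
player.y += player.vy * delta;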
