LibGDX - Bullet - 3D collision detection not detecting collisions - java

I'm trying to do a 3D collision test with Bullet, which I would have expected to work fairly simply:
@Override
public void create() {
    . . .
    /*
     * Collision
     */
    collisionConfig = new btDefaultCollisionConfiguration();
    dispatcher = new btCollisionDispatcher(collisionConfig);
    broadphase = new btDbvtBroadphase();
    world = new btCollisionWorld(dispatcher, broadphase, collisionConfig);
    contactListener = new PiscesContactListener();
    contactListener.enable();
    /*
     * Test stuff
     */
    btSphereShape sphere = new btSphereShape(2.5f);
    btCollisionObject object1 = new btCollisionObject();
    btCollisionObject object2 = new btCollisionObject();
    object1.setCollisionShape(sphere);
    object2.setCollisionShape(sphere);
    object1.setWorldTransform(new Matrix4(new Vector3(0f, 0f, 0f), new Quaternion(0f, 0f, 0f, 0f), new Vector3(1f, 1f, 1f)));
    object2.setWorldTransform(new Matrix4(new Vector3(1f, 0f, 0f), new Quaternion(0f, 0f, 0f, 0f), new Vector3(1f, 1f, 1f)));
    object1.setUserValue(0);
    object2.setUserValue(1);
    object1.setCollisionFlags(WorldObject.COLLISION_PRIMARY); // 1<<9
    object2.setCollisionFlags(WorldObject.COLLISION_PRIMARY);
    world.addCollisionObject(object1, WorldObject.COLLISION_PRIMARY, WorldObject.COLLISION_EVERYTHING); // -1
    world.addCollisionObject(object2, WorldObject.COLLISION_PRIMARY, WorldObject.COLLISION_EVERYTHING);
    . . .
}

@Override
public void render() {
    . . .
    world.performDiscreteCollisionDetection();
    . . .
}

/*
 * In a separate file
 */
public class PiscesContactListener extends ContactListener {
    @Override
    public boolean onContactAdded(int userValue0, int partId0, int index0, boolean match0, int userValue1, int partId1, int index1, boolean match1) {
        System.out.println("Collision detected between " + userValue0 + " and " + userValue1);
        return true;
    }
}
(I followed this guide, though I obviously deviated from it a little to try to make it even simpler.)
A 2.5-unit sphere at (0, 0, 0) and a 2.5-unit sphere at (1, 0, 0) should cause a collision, should they not? Nothing's appearing in the console window.
I somewhat suspect there's something fundamental that I'm forgetting, since I tried to do a raycast and that isn't working either.
It's probably worth mentioning that calling world.drawDebugWorld(); draws wireframes of all the objects where they would be expected to be, so now I'm suspecting the error lies in the contact listener or maybe the flags I'm using (though I haven't had any luck with any other collision flags). Am I missing something in the contact listener?
Also, I tried using a ContactCache instead of a ContactListener but that didn't work either. Most reading I can find on the matter talks about ContactListener so I'm probably better off using that.
I'm obviously forgetting something since other people are able to do this, can anyone point out what that might be?

You are mixing collision flags (btCollisionObject#setCollisionFlags(int flags)) and collision filtering (btCollisionWorld#addCollisionObject(object, short group, short mask)). These are two very different things.
You should not call the btCollisionObject#setCollisionFlags(int) method with anything other than the available CollisionFlags values (see here). The CF_CUSTOM_MATERIAL_CALLBACK collision flag must be set for the ContactListener#onContactAdded method to be called, which is why your code doesn't work as you expected.
Note that this is explained in my tutorial, the one you are referring to, as well:
public void spawn() {
    ...
    obj.body.setUserValue(instances.size);
    obj.body.setCollisionFlags(obj.body.getCollisionFlags() | btCollisionObject.CollisionFlags.CF_CUSTOM_MATERIAL_CALLBACK);
    ...
}
In the spawn method we set this value, using the setUserValue method, to the index of the object in the instances array. And we also inform Bullet that we want to receive collision events for this object by adding the CF_CUSTOM_MATERIAL_CALLBACK flag. This flag is required for the onContactAdded method to be called.
There is no reason to use 1<<11 instead of 1<<9 for collision filtering. You probably changed something else (e.g. the collision flag) as well and wrongfully assumed that this change in the collision filter caused it to start working.
Note that besides collision flags and collision filtering, there is also the (libgdx specific) contact callback filtering (explained in the second part of that tutorial). Keep in mind that those are three totally different and unrelated things.
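To make that concrete, here is a minimal sketch of the fix applied to the question's setup (not verbatim from the tutorial; WorldObject.COLLISION_PRIMARY and COLLISION_EVERYTHING are the asker's own filter constants):
// Built-in collision flags: request contact callbacks for these objects.
object1.setCollisionFlags(object1.getCollisionFlags() | btCollisionObject.CollisionFlags.CF_CUSTOM_MATERIAL_CALLBACK);
object2.setCollisionFlags(object2.getCollisionFlags() | btCollisionObject.CollisionFlags.CF_CUSTOM_MATERIAL_CALLBACK);

// Collision filtering: the custom group/mask bits belong here, not in setCollisionFlags.
world.addCollisionObject(object1, WorldObject.COLLISION_PRIMARY, WorldObject.COLLISION_EVERYTHING);
world.addCollisionObject(object2, WorldObject.COLLISION_PRIMARY, WorldObject.COLLISION_EVERYTHING);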

Related

How to get the distance that a body has moved during a box2D world step?

I'm trying to implement linear interpolation and a fixed time step for my game loop. I'm using the libGDX engine and box2D. I'm attempting to find the amount the simulation moves my character's body during a world step like this:
old_pos = guyBody.getPosition();
world.step(STEP_TIME, VELOCITY_ITERATIONS, POSITION_ITERATIONS);
new_pos = guyBody.getPosition();
printLog(new_pos.x-old_pos.x);
This returns 0 each time. The simulation works fine, and the body definitely moves each step.
Additional code:
@Override
public void render(float delta) {
    accumulator += delta;
    while (accumulator >= STEP_TIME) {
        accumulator -= STEP_TIME;
        stepWorld();
    }
    alpha = accumulator / STEP_TIME;
    update(delta);
    //RENDER
}

private void stepWorld() {
    old_pos = guyBody.getPosition();
    old_angle = guyBody.getAngle() * MathUtils.radiansToDegrees;
    world.step(STEP_TIME, VELOCITY_ITERATIONS, POSITION_ITERATIONS);
    new_angle = guyBody.getAngle() * MathUtils.radiansToDegrees;
    new_pos = guyBody.getPosition();
}
I'm attempting to use alpha to check how far I am in between physics steps so I can interpolate a Sprite's position.
Thanks!
Body's getPosition method returns a reference to the body's internal position Vector2. That means you are not copying the position by value; you are only assigning a "pointer" to the same position object to old_pos/new_pos. You assign it once before the step and once after, so in the end both variables hold the same object, whose state is already the one after the step.
What you need to do is copy the position vector by value, which you can do with Vector2's cpy() method.
Your code should look like:
old_pos = guyBody.getPosition().cpy();
world.step(STEP_TIME, VELOCITY_ITERATIONS, POSITION_ITERATIONS);
new_pos = guyBody.getPosition().cpy();
printLog(new_pos.x-old_pos.x);
If you only use the x coordinate, you could also consider keeping just x in a float variable instead of copying the whole object (although this should not really impact your performance).
While the accepted answer does answer my question, I wanted to add some information I figured out while trying to get this to work that I wish I had known at the beginning.
If you're going to use a fixed timestep for your physics calculations (which you should), you should also interpolate (or extrapolate) a Sprite's position between physics steps. In my code, the screen is rendered more often than the world is stepped:
@Override
public void render(float delta) {
    accumulator += delta;
    while (accumulator >= STEP_TIME) {
        accumulator -= STEP_TIME;
        stepWorld();
    }
    alpha = accumulator / STEP_TIME;
    update(delta);
    //RENDER using alpha
}
To avoid a jittery rendering of moving objects, render Sprites or Textures at their positions, modified by alpha. Since alpha is the ratio of your accumulator to the step time, it will always be between 0 and 1.
You then need to find how much your body is moving during one step. This can be done with the accepted answer or using the body velocity:
newPos = oldPos + body.getLinearVelocity()*STEP_TIME*alpha
Then just render the Sprite at the new position and you should see smooth movement with your fixed timestep at most frame rates.
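As a concrete illustration, here is a minimal interpolation sketch for the render step (guySprite and batch are assumed names, not from the question; old_pos and new_pos are the cpy()-copied positions described in the accepted answer):
// Interpolate between the last two physics states; alpha is accumulator / STEP_TIME.
float drawX = old_pos.x + (new_pos.x - old_pos.x) * alpha;
float drawY = old_pos.y + (new_pos.y - old_pos.y) * alpha;
guySprite.setPosition(drawX, drawY);

batch.begin();
guySprite.draw(batch);
batch.end();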

libgdx Fixed point after camera.rotateAround

Good night, friends.
I'm having trouble keeping a point fixed on the screen when the screen is rotated. I rotate the camera with rotateAround around the player's position.
It seems to me that I have to rotate this fixed point around the player's position as well. I use this snippet, which I learned here on Stack Overflow:
public void rotate(Vector3 position, Vector3 centerPoint) {
    this.cosTemp = MathUtils.cosDeg(this.anguloAtual);
    this.senTemp = MathUtils.sinDeg(this.anguloAtual);
    this.xTemp = centerPoint.x + ((position.x - centerPoint.x) * this.cosTemp) - ((position.y - centerPoint.y) * this.senTemp);
    this.yTemp = centerPoint.y + ((position.y - centerPoint.y) * this.cosTemp) + ((position.x - centerPoint.x) * this.senTemp);
    position.set(this.xTemp, this.yTemp, 0);
}
When drawing the player on the screen, I use the player's position, then call camera.project, then the rotate method. The fixed point appears, but it is not exactly fixed.
As an example I use a fixed point slightly ahead of the player.
public void meDesenhar(SpriteBatch spriteBatch) {
    spriteBatch.begin();
    this.spritePlayer.setPosition(this.positionPlayer.x - (this.spritePlayer.getWidth() / 2),
            this.positionPlayer.y - this.spritePlayer.getHeight() / 2);
    this.spritePlayer.draw(spriteBatch);
    spriteBatch.end();

    originPosition.set(positionPlayer, 0);
    fixedPosition.set(positionPlayer.x, positionPlayer.y + 10, 0);
    cameraTemp.project(fixedPosition);
    cameraTemp.project(originPosition);
    cameraManagerTemp.rotate(fixedPosition, originPosition);
    Debugagem.drawPointInScreen(Color.BLUE, fixedPosition);
}
My questions:
1 - Am I doing something wrong, or is it just a result of rounding? While debugging I noticed the player's position changes a little on every rotation after camera.project; for example, (540, 320) becomes (539.99, 320.013).
2 - I tried taking advantage of SpriteBatch's draw method to perform the rotation, but I could not make it rotate around the player; I would arrive at the same result.
3 - Can I use two cameras? Each camera would be a layer: one camera for the map and the player, the other for the fixed point. Is that viable? I could not find any example that works with more than one camera at the same time. Does anyone know of any examples, please? I'm not talking about HUDs or stage cameras.
Video follows.
https://www.youtube.com/watch?v=1Vg8haN5ULE
Thank you.
It can be a result of rounding, since it's only moving by about a pixel.
You can calculate the rotation from the player, but it's not necessary.
Of course you can use multiple cameras in your game, and in this case you should.
Here are a few screenshots from my old projects where I used multiple cameras.
As you can see, you can even mix different types of cameras, like orthographic and perspective, both 2D and 3D.
Just create the new camera like the first one and switch the projection matrix:
camrotate = new OrthographicCamera(540, 960);
//...
camfixed = new OrthographicCamera(540, 960);
//...
And in render method
batch.setProjectionMatrix(camrotate.combined);
batch.begin();
//draw in camrotate now
//...
//...
batch.end();
batch.setProjectionMatrix(camfixed.combined);
batch.begin();
//draw fixed elements now
//...
//...
batch.end();
//add one more camera if you need
Edit:
Change the projection matrix outside of batch.begin()/end(), otherwise the current batch will be flushed.
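For what it's worth, here is a minimal sketch of the two-camera idea applied to the question (player, rotationDegrees, shapeRenderer and fixedPosition are assumed names, not from the original code):
// Rotate only the world camera around the player; the second camera never rotates.
camrotate.rotateAround(new Vector3(player.x, player.y, 0f), Vector3.Z, rotationDegrees);
camrotate.update();
camfixed.update();

batch.setProjectionMatrix(camrotate.combined);
batch.begin();
// draw the map and the player with the rotating camera
batch.end();

// Anything drawn with camfixed stays put on the screen while the world spins.
shapeRenderer.setProjectionMatrix(camfixed.combined);
shapeRenderer.begin(ShapeType.Filled);
shapeRenderer.setColor(Color.BLUE);
shapeRenderer.circle(fixedPosition.x, fixedPosition.y, 4f);
shapeRenderer.end();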

JOGL OpenGL only rendering under weird conditions

I'm currently trying to get a very simple program to work. It just displays a white cross on a black background. The problem is that my cross only renders under strange conditions. These are all the conditions I have figured out so far:
The layout of the vertex shader position input has to be greater than 2
Any call to glBindVertexArray(0) is causing the cross not to render even after calling glBindVertexArray(array)
I have to call glUseProgram before every draw call
As you can see, I no longer have any idea what is actually happening here. How do I fix this bug?
Here is the code:
int axesVBO;
int axesVAO;
int vert, frag;
int program;
@Override
public void display(GLAutoDrawable drawable) {
System.out.println("Render");
GL4 gl = drawable.getGL().getGL4();
gl.glClear(GL4.GL_COLOR_BUFFER_BIT | GL4.GL_DEPTH_BUFFER_BIT);
gl.glBindVertexArray(axesVAO);
gl.glUseProgram(program); //Doesn't work without this
gl.glDrawArrays(GL4.GL_LINES, 0, 2);
gl.glDrawArrays(GL4.GL_LINES, 2, 2);
gl.glBindVertexArray(0); //After this line the cross isn't rendered anymore
}
@Override
public void dispose(GLAutoDrawable drawable) {
GL4 gl = drawable.getGL().getGL4();
gl.glDeleteBuffers(1, IntBuffer.wrap(new int[]{axesVBO}));
gl.glDeleteVertexArrays(1, IntBuffer.wrap(new int[]{axesVAO}));
gl.glDeleteProgram(program);
gl.glDeleteShader(vert);
gl.glDeleteShader(frag);
}
@Override
public void init(GLAutoDrawable drawable) {
System.out.println("Init");
GL4 gl = drawable.getGL().getGL4();
IntBuffer buffer = Buffers.newDirectIntBuffer(2);
gl.glGenBuffers(1, buffer);
axesVBO = buffer.get(0);
vert = gl.glCreateShader(GL4.GL_VERTEX_SHADER);
frag = gl.glCreateShader(GL4.GL_FRAGMENT_SHADER);
gl.glShaderSource(vert, 1, new String[]{"#version 410\n in vec2 pos;void main() {gl_Position = vec4(pos, 0, 1);}"}, null);
gl.glShaderSource(frag, 1, new String[]{"#version 410\n out vec4 FragColor;void main() {FragColor = vec4(1, 1, 1, 1);}"}, null);
gl.glCompileShader(vert);
gl.glCompileShader(frag);
if(GLUtils.getShaderiv(gl, vert, GL4.GL_COMPILE_STATUS) == GL.GL_FALSE) {
System.out.println("Vertex shader compilation failed:");
System.out.println(GLUtils.getShaderInfoLog(gl, vert));
} else {
System.out.println("Vertex shader compilation sucessfull");
}
if(GLUtils.getShaderiv(gl, frag, GL4.GL_COMPILE_STATUS) == GL.GL_FALSE) {
System.out.println("Fragment shader compilation failed:");
System.out.println(GLUtils.getShaderInfoLog(gl, frag));
} else {
System.out.println("Fragment shader compilation sucessfull");
}
program = gl.glCreateProgram();
gl.glAttachShader(program, vert);
gl.glAttachShader(program, frag);
gl.glBindAttribLocation(program, 2, "pos"); //Only works when location is > 2
gl.glLinkProgram(program);
if(GLUtils.getProgramiv(gl, program, GL4.GL_LINK_STATUS) == GL.GL_FALSE) {
System.out.println("Program linking failed:");
System.out.println(GLUtils.getProgramInfoLog(gl, program));
} else {
System.out.println("Program linking sucessfull");
}
gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, axesVBO);
gl.glBufferData(GL4.GL_ARRAY_BUFFER, Float.BYTES * 8, FloatBuffer.wrap(new float[]{-1f, 0, 1f, 0, 0, 1f, 0, -1f}), GL4.GL_STATIC_DRAW);
gl.glUseProgram(program);
buffer.clear();
gl.glGenVertexArrays(1, buffer);
axesVAO = buffer.get();
gl.glBindVertexArray(axesVAO);
int pos = gl.glGetAttribLocation(program, "pos");
gl.glEnableVertexAttribArray(pos);
gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, axesVBO);
gl.glVertexAttribPointer(pos, 2, GL4.GL_FLOAT, false, 0, 0);
//Commented out for testing reasons (doesn't work when active)
//gl.glBindVertexArray(0);
gl.glClearColor(0f, 0f, 0f, 1f);
}
The conditions you figured out do look strange. In general, keeping the code clean and simple helps a lot to avoid nasty bugs. Start clean and simple and then build it up :)
A few considerations:
don't use plain ints for the vbo and vao; use direct buffers directly
there is no need to declare vert and frag globally if they are only used in init; declare them locally in that method instead
prefer generating direct buffers with the JOGL utility GLBuffers.newDirect*Buffer(...)
prefer, at least at the beginning, the JOGL utilities (ShaderCode.create and ShaderProgram) to compile your shaders; they offload work and potential bugs from you and include deeper checks on every step of shader creation (sometimes even too much, but nowadays shaders compile so fast it doesn't matter)
if you have ARB_explicit_attrib_location (you can check with gl4.isExtensionAvailable("GL_ARB_explicit_attrib_location");), use it everywhere you can; it avoids a lot of potential bugs and overhead with any kind of location lookup (such as glBindAttribLocation and glGetAttribLocation)
it's better to pass a direct buffer to glBufferData, so that JOGL doesn't have to create one underneath and you can keep track of it to deallocate it later (see the sketch after this list)
keep the init clean and readable; you are mixing a lot of stuff together, for example you generate the vbo at the beginning, then create the program, then upload data to the vbo
gl.glUseProgram(program); makes no sense in init, unless your idea is to bind it and leave it bound; normally binding the program is part of the setup right before a render call, so it is better to move it into display()
prefer glClearBuffer to glClear
double-check the arguments to gl.glDrawArrays(GL4.GL_LINES, 0, 2): the second argument is the index of the first vertex and the third is the vertex count
if you need inspiration, take a look at this Hello Triangle
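As an illustration of the direct-buffer points above, here is a minimal sketch (assuming a GL4 instance named gl, as in the question, and JOGL's com.jogamp.opengl.util.GLBuffers utility; the local variable names are made up for the example):
// Vertex data for the cross, kept in a direct buffer that JOGL can hand to OpenGL as-is.
FloatBuffer vertexData = GLBuffers.newDirectFloatBuffer(new float[]{
        -1f, 0f,   1f, 0f,   // horizontal line
         0f, 1f,   0f, -1f   // vertical line
});

IntBuffer vboName = GLBuffers.newDirectIntBuffer(1);
gl.glGenBuffers(1, vboName);
gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, vboName.get(0));
gl.glBufferData(GL4.GL_ARRAY_BUFFER,
        vertexData.capacity() * Float.BYTES, vertexData, GL4.GL_STATIC_DRAW);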

Trying to draw a circle in LibGDX

This is really basic but I can't figure out what's going wrong. Basically I'm trying to draw a circle around a certain area of one of my objects. I've initialised a ShapeRenderer in the constructor (called srDebugCircle) and have this for loop in the render() method to draw every object:
for (GameObject object : levels.get(LEVEL_INDEX)) {
    if (object.getType() == ObjectType.SWINGING_SPIKES) {
        object.draw(batch);
        srDebugCircle.begin(ShapeType.Filled);
        srDebugCircle.circle(object.getxPos() + object.getWidth()/2, object.getyPos(), object.getWidth()/2);
        srDebugCircle.setColor(Color.BLACK);
        srDebugCircle.end();
    }
    if (object.getType() == ObjectType.COIN && ((Coin) object).isVisible()) {
        object.draw();
    }
    ...
}
The problem is I only see about 4 out of 15 objects when I add the code for the circle. When I remove or comment it out, everything renders as usual; however, in both cases, I never see a black filled circle.
I'm specifically talking about this part:
srDebugCircle.begin(ShapeType.Filled);
srDebugCircle.circle(object.getxPos() + object.getWidth()/2, object.getyPos(), object.getWidth()/2);
srDebugCircle.setColor(Color.BLACK);
srDebugCircle.end();
Can anybody see why I'm having this problem?
An alternative to Springrbua's answer is to draw using a Pixmap instead of the ShapeRenderer. Switching between SpriteBatch and ShapeRenderer is an expensive operation, and a Pixmap doesn't require ending the SpriteBatch. Pixmap offers fewer draw methods than ShapeRenderer, but it does include drawing a filled circle.
// build the texture once (e.g. in create()/show()), not every frame
Pixmap pixmap = new Pixmap(width, height, Pixmap.Format.RGBA8888);
pixmap.setColor(Color.BLACK);
pixmap.fillCircle(x, y, r);
Texture texture = new Texture(pixmap);
pixmap.dispose(); // the Pixmap can be disposed once its contents are uploaded to the Texture
// render
batch.begin();
batch.draw(texture, x, y);
batch.end();
The problem is that you have two renderers/batches running at the same time:
the SpriteBatch batch and the ShapeRenderer srDebugCircle.
This can result in strange behavior.
To solve the problem, call end() on one renderer/batch before calling begin() on the other.
In your case it would look something like this:
object.draw(batch);
batch.end();
srDebugCircle.begin(ShapeType.Filled);
srDebugCircle.setColor(Color.BLACK); // Set Color before drawing
srDebugCircle.circle(object.getxPos() + object.getWidth()/2, object.getyPos(), object.getWidth()/2);
srDebugCircle.end();
Also note that calling end() on a SpriteBatch calls flush(), which should be called as rarely as possible. Therefore it might be a good idea to draw everything with the SpriteBatch first and then draw all the ShapeRenderer things, as sketched below.
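A minimal sketch of that ordering, using the question's own batch and srDebugCircle (the loop bounds are taken from the question's snippet, the rest is an assumption about the surrounding code):
// First pass: everything that goes through the SpriteBatch.
batch.begin();
for (GameObject object : levels.get(LEVEL_INDEX)) {
    object.draw(batch);
}
batch.end();

// Second pass: all ShapeRenderer work, after the batch has ended.
srDebugCircle.begin(ShapeType.Filled);
srDebugCircle.setColor(Color.BLACK);
for (GameObject object : levels.get(LEVEL_INDEX)) {
    if (object.getType() == ObjectType.SWINGING_SPIKES) {
        srDebugCircle.circle(object.getxPos() + object.getWidth() / 2,
                object.getyPos(), object.getWidth() / 2);
    }
}
srDebugCircle.end();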

Inputting a Key during a Loop in Java applet for animation purposes

I'm learning how to make an applet in Java, in the form of a game. In the game I have a character sprite drawn at the center that moves when the player presses W, A, S or D.
It goes like this:
public class Game extends Applet implements KeyListener {
    int x, y;
    URL url;
    Image image;

    public void init() {
        x = getSize().width / 2;
        y = getSize().height / 2;
        url = getCodeBase();
        image = getImage(url, "player.gif"); //take note that this is a still image
        addKeyListener(this);
    }

    public void paint(Graphics g) {
        g.drawImage(image, x, y, 32, 32, this); //the size of the image is 32x32
    }

    public void keyPressed(KeyEvent arg0) {
        char c = arg0.getKeyChar();
        switch (c) {
            case 'w':
                y -= 10;
                break;
            /* And so on. You guys know how it works. */
        }
        repaint();
    }
    // keyReleased and keyTyped omitted
}
My problem is that the character sprite seems dull when the user doesn't press anything. What I want to do is make the Image an array of images and play a simple animation by looping over the array in paint, like so:
public void paint(Graphics g) {
    for (int i = 0; ; i++) {
        g.drawImage(image[i], x, y, 32, 32, this);
        if (i == image.length - 1) { i = 0; }
    }
}
However, if I do this, I won't be able to get anymore KeyEvents that would activate when the user wants to move. My question is this: How will I make it so that my character does an animation when the program is "idle" (i.e. the user isn't pressing anything) while still maintaining the capability to take in KeyEvents (e.g. moving when the player types in w, a, s, or d, and then continuing the idle animation after repainting)?
Thanks in advance.
PS. I'm still quite a beginner in Java so sorry if my code is not very nice. Any advice is welcome.
You need to make your application multithreaded so that the painting runs in a separate thread.
Otherwise you will have the painting blocked while waiting for the next key.
This is not a trivial change to your code though.
Perhaps you could give yourself a new class, say InterestingCharacter, which can cycle through any of N states (corresponding to your N images). You clearly can't let any paint method run forever, but if your InterestingCharacter could render itself in its current state, you might be onto something. Maybe it will be enough that the InterestingCharacter knows what state it is in and some other object manages the rendering. Would it be helpful if the InterestingCharacter could tell you that its state has changed and so needs to be rendered again? If so, you could implement the Observer pattern, such that the character is observed and your game is an observer.
I think the trick will be to break the problem down into a few classes with appropriate responsibilities; ideally a class should have one responsibility and each of its methods should do one thing.
Just some ideas to help you move forward. Experiment with it and see how it goes. Hope it helps!
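For example, one non-blocking way to drive an idle animation is with a javax.swing.Timer rather than a hand-rolled render thread; this is only a minimal sketch, and the class and field names below (IdleAnimationPanel, frames, frameIndex) are made up, not from the question:
import java.awt.Graphics;
import java.awt.Image;
import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;
import javax.swing.JPanel;
import javax.swing.Timer;

public class IdleAnimationPanel extends JPanel {
    private final Image[] frames;   // idle animation frames, loaded elsewhere
    private int frameIndex = 0;
    private int x = 100, y = 100;

    public IdleAnimationPanel(Image[] frames) {
        this.frames = frames;
        setFocusable(true);
        // Advance the idle animation roughly 10 times per second.
        new Timer(100, e -> {
            frameIndex = (frameIndex + 1) % frames.length;
            repaint();
        }).start();
        // Movement keys still arrive normally because painting never blocks.
        addKeyListener(new KeyAdapter() {
            @Override
            public void keyPressed(KeyEvent e) {
                switch (e.getKeyChar()) {
                    case 'w': y -= 10; break;
                    case 's': y += 10; break;
                    case 'a': x -= 10; break;
                    case 'd': x += 10; break;
                }
                repaint();
            }
        });
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        // Draw only the current frame; never loop inside the paint method.
        g.drawImage(frames[frameIndex], x, y, 32, 32, this);
    }
}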
