I am using LibGDX and have an ArrayList of points which I want to connect. I am aware there are several ShapeRenderer methods that would work, but I'm running a SpriteBatch at the same time, so how do I draw a line between two Vectors? (If it exists, I'd also like the function that draws multiple lines at once with an Array of Vector2 as a parameter, though that isn't essential; otherwise I'd manage with a for-loop.)
I am also aware I can use Pixmaps but they don't seem to work correctly. Here is my attempt:
// point1 and point2 are of type Vector2
// the coordinates are floats, so they need casting to int for the Pixmap API
Pixmap pixmap = new Pixmap((int) (point2.x - point1.x), 2, Pixmap.Format.RGBA8888);
pixmap.setColor(Color.WHITE);
pixmap.drawLine((int) point1.x, (int) point1.y, (int) point2.x, (int) point2.y);
In response to a suggested solution that uses a ShapeRenderer at the same time, this problem arises (the second image shows the points drawn with pixmaps, the first the ShapeRenderer lines).
The code used for the first image is the following:
for(int i = 1; i < dotPositions.size(); i++) {
sr.line(dotPositions.get(i-1), dotPositions.get(i));
}
The code used for the second image is the following:
Pixmap pixmap = new Pixmap(2, 2, Pixmap.Format.RGBA8888);
pixmap.setColor(Color.WHITE);
pixmap.fillCircle(2, 2, 2);
Texture texture = new Texture(pixmap);
for(int i = 1; i < dotPositions.size(); i++) {
batch.draw(texture, dotPositions.get(i).x, dotPositions.get(i).y);
}
In both cases dotPositions is an ArrayList<Vector2> with the same values.
In case anyone is interested in the future: I found the solution. All I had to do was call ShapeRenderer.setProjectionMatrix(cam.combined) to sync the ShapeRenderer with the SpriteBatch's camera.
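For reference, a minimal sketch of that fix, using the sr, batch, cam and dotPositions names from the snippets above (ShapeRenderer also has a polyline(float[]) method if you prefer a single call for all points):
batch.end();                           // don't keep both renderers active at once
sr.setProjectionMatrix(cam.combined);  // sync the ShapeRenderer with the SpriteBatch camera
sr.begin(ShapeRenderer.ShapeType.Line);
for (int i = 1; i < dotPositions.size(); i++) {
    sr.line(dotPositions.get(i - 1), dotPositions.get(i));
}
sr.end();
batch.begin();                         // resume sprite drawing if needed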
I'm working on creating a voxel engine in LWJGL 3, I have all the basics down (chunks, mesh rendering, etc).
Now I'm working on adding physics using JBullet. This is my first time using JBullet directly, but I've used Bullet before in other 3D engines.
From here I gathered that all I needed to do to create a collision object the same shape as my mesh was to plug the vertices and indices into a TriangleIndexVertexArray and use that for a BvhTriangleMeshShape.
Here is my code:
float[] coords = mesh.getVertices();
int[] indices = mesh.getIndices();
if (indices.length > 0) {
IndexedMesh indexedMesh = new IndexedMesh();
indexedMesh.numTriangles = indices.length / 3;
indexedMesh.triangleIndexBase = ByteBuffer.allocateDirect(indices.length*Float.BYTES).order(ByteOrder.nativeOrder());
indexedMesh.triangleIndexBase.asIntBuffer().put(indices);
indexedMesh.triangleIndexStride = 3 * Float.BYTES;
indexedMesh.numVertices = coords.length / 3;
indexedMesh.vertexBase = ByteBuffer.allocateDirect(coords.length*Float.BYTES).order(ByteOrder.nativeOrder());
indexedMesh.vertexBase.asFloatBuffer().put(coords);
indexedMesh.vertexStride = 3 * Float.BYTES;
TriangleIndexVertexArray vertArray = new TriangleIndexVertexArray();
vertArray.addIndexedMesh(indexedMesh);
boolean useQuantizedAabbCompression = false;
BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
CollisionShape collisionShape = meshShape;
CollisionObject colObject = new CollisionObject();
colObject.setCollisionShape(collisionShape);
colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f)));
dynamicsWorld.addCollisionObject(colObject);
} else {
System.err.println("Failed to extract geometry from model. ");
}
I know that the vertices and indices are valid as I'm getting them here after drawing my mesh.
This seems to somewhat work, but when I try to drop a cube rigidbody onto the terrain, it collides way above the terrain! (I know that the cube is set up correctly, because if I remove the mesh collider it hits the base ground plane at y=0.)
I thought maybe it was a scaling issue (although I don't see how that could be), so I tried changing:
colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f)));
to:
colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 0.5f)));
But after changing the scale from 1 it acted like the mesh collider didn't exist.
It's hard to find resources or example code for mesh collision in JBullet, and I've been working on this for almost two days, so I'm hoping someone who has done it before can help me out :)
Update 1:
I created an implementation of the IDebugDrawer so I could draw the debug information in the scene.
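For anyone trying the same thing, registering the drawer looks roughly like this. This is only a sketch: the names (IDebugDraw, DebugDrawModes, setDebugDrawer, debugDrawWorld) are from the jezek2 JBullet port as I remember them, so verify them against your version.
// Hook a custom IDebugDraw implementation into the dynamics world.
// 'myDebugDrawer' is a placeholder for my IDebugDrawer implementation.
myDebugDrawer.setDebugMode(DebugDrawModes.DRAW_WIREFRAME | DebugDrawModes.DRAW_AABB);
dynamicsWorld.setDebugDrawer(myDebugDrawer);
// Then, once per frame after stepSimulation(), ask Bullet to emit the debug geometry:
dynamicsWorld.debugDrawWorld();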
To test it I ran it with just a basic ground plane and a falling cube. I noticed that when the cube is falling the aabb matches the cube size, but when it hits the floor the aabb becomes significantly larger than it was.
I'm going to assume that this is normal Bullet behavior due to collision bouncing, and look at it later as it doesn't affect my current problem.
I re-enabled the generation of the colliders from the chunk meshes, and saw this:
It appears that the aabb visualization of the chunk is a lot higher than the actual chunk (I know the y positioning of the overall collision object is correct).
I'm going to try to figure out if I can draw the actual collision mesh or not.
Update 2:
As far as I can see looking at the source, the mesh of the colliders should be drawn in debug mode, so I'm not sure why it isn't.
I tried changing the box rigidbody to a sphere, and it actually rolled across the top of the visualized aabb for the terrain collider. It just rolled flat though, and didn't go up or down where there were hills or dips in the terrain, so it was obviously just rolling across the flat top of the aabb.
So after adding in the debug drawer, I was confused as to why the aabb was twice as large as it should have been.
After spending hours trying little adjustments, I noticed something odd - there was a 0.25 gap between the collider and the edge of the chunk. I proceeded to zoom out and surprisingly noticed this:
There is an extra row and column of colliders? No, that doesn't make sense; there should be 5x5 colliders to match the 5x5 chunks.
Then I counted blocks and realized that the colliders were spanning 64 blocks (my chunks are 32x32!).
I quickly realized that this was a scaling issue, and after adding
BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
meshShape.setLocalScaling(new Vector3f(0.5f, 0.5f, 0.5f));
To scale the colliders down by half, everything fit and worked! My "sphere" rolled and came to a stop at a hill in the terrain, like it should.
My full code for converting an LWJGL mesh to a JBullet mesh collider is:
public void addMesh(org.joml.Vector3f position, Mesh mesh){
float[] coords = mesh.getVertices();
int[] indices = mesh.getIndices();
if (indices.length > 0) {
IndexedMesh indexedMesh = new IndexedMesh();
indexedMesh.numTriangles = indices.length / 3;
indexedMesh.triangleIndexBase = ByteBuffer.allocateDirect(indices.length*Integer.BYTES).order(ByteOrder.nativeOrder());
indexedMesh.triangleIndexBase.rewind();
indexedMesh.triangleIndexBase.asIntBuffer().put(indices);
indexedMesh.triangleIndexStride = 3 * Integer.BYTES;
indexedMesh.numVertices = coords.length / 3;
indexedMesh.vertexBase = ByteBuffer.allocateDirect(coords.length*Float.BYTES).order(ByteOrder.nativeOrder());
indexedMesh.vertexBase.rewind();
indexedMesh.vertexBase.asFloatBuffer().put(coords);
indexedMesh.vertexStride = 3 * Float.BYTES;
TriangleIndexVertexArray vertArray = new TriangleIndexVertexArray();
vertArray.addIndexedMesh(indexedMesh);
boolean useQuantizedAabbCompression = false;
BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
meshShape.setLocalScaling(new Vector3f(0.5f, 0.5f, 0.5f));
CollisionShape collisionShape = meshShape;
CollisionObject colObject = new CollisionObject();
colObject.setCollisionShape(collisionShape);
colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f)));
dynamicsWorld.addCollisionObject(colObject);
} else {
System.err.println("Failed to extract geometry from model. ");
}
}
Update:
Even though the scaling was the fix for said problem, it made me look deeper, and I realized that I was mistakenly using the block size (0.5f) as the scaling factor in my mesh view matrix. Changing that scale back to 1, as it should be, fixed it.
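For reference, a minimal sketch of the corrected model matrix build with JOML; the translate/scale calls are JOML's, while the surrounding names are placeholders rather than my actual renderer code:
// Build the chunk's model matrix with unit scale; the 0.5f block size now only
// lives in the mesh data, not in the view transform.
Matrix4f modelMatrix = new Matrix4f()
        .translate(position)  // org.joml.Vector3f chunk position
        .scale(1f);           // was mistakenly 0.5f before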
Good night friends.
I'm having trouble drawing a fixed point on the screen when the camera is rotated. I rotate the camera with the "rotateAround" method around the position of the player.
It seems to me that I also have to rotate this fixed point around the position of the player. I use this snippet, learned here on Stack Overflow:
public void rotate(Vector3 position, Vector3 centerPoint){
this.cosTemp = MathUtils.cosDeg(this.anguloAtual);
this.senTemp = MathUtils.sinDeg(this.anguloAtual);
this.xTemp = centerPoint.x + ((position.x - centerPoint.x) * this.cosTemp) - ((position.y - centerPoint.y) * this.senTemp);
this.yTemp = centerPoint.y + ((position.y - centerPoint.y) * this.cosTemp) + ((position.x - centerPoint.x) * this.senTemp);
position.set(this.xTemp, this.yTemp, 0);
}
When drawing the player on the screen, I use the player's position, then call "camera.project" and then the "rotate" method. The fixed point appears, but it is not exactly fixed.
As an example, I use a fixed point slightly ahead of the player.
public void meDesenhar(SpriteBatch spriteBatch) {
spriteBatch.begin();
this.spritePlayer.setPosition(this.positionPlayer.x - (this.spritePlayer.getWidth() / 2),
this.positionPlayer.y - this.spritePlayer.getHeight() / 2);
this.spritePlayer.draw(spriteBatch);
spriteBatch.end();
originPosition.set(positionPlayer, 0);
fixedPosition.set(positionPlayer.x, positionPlayer.y + 10, 0);
cameraTemp.project(fixedPosition);
cameraTemp.project(originPosition);
cameraManagerTemp.rotate(fixedPosition, originPosition);
Debugagem.drawPointInScreen(Color.BLUE, fixedPosition);
}
My questions:
1 - Am I doing something wrong, or is it just a result of rounding? I noticed while debugging that the position of the player changes a little on every rotation after "camera.project"; for example, position (540, 320) became (539.99, 320.013).
2 - I tried using the SpriteBatch draw method to perform the rotation; however, I could not make the rotation happen around the player. I would presumably arrive at the same result anyway.
3 - Can I use two cameras? Each camera would be a layer: one camera for the map and the player, the other for the fixed point. Is that viable? I could not find any example that works with more than one camera at the same time. Does anyone know of any examples, please? I'm not talking about HUDs or stage cameras.
Video follows.
https://www.youtube.com/watch?v=1Vg8haN5ULE
Thank you.
It can be a result of rounding, because it's only moving by a pixel.
You can calculate the rotation from the player, but it's not necessary.
Of course you can use multiple cameras in your game, and in this case you should.
Here are a few screenshots from my old projects where I used multiple cameras.
As you can see, you can even mix different types of cameras, like orthographic and perspective, both 2D and 3D.
Just create a new camera like the first one and change the projection matrix:
camrotate = new OrthographicCamera(540, 960);
//...
camfixed = new OrthographicCamera(540, 960);
//...
And in render method
batch.setProjectionMatrix(camrotate.combined);
batch.begin();
//draw in camrotate now
//...
//...
batch.end();
batch.setProjectionMatrix(camfixed.combined);
batch.begin();
//draw fixed elements now
//...
//...
batch.end();
//add one more camera if you need
Edit:
Change the projection matrix outside of batch.begin()/end(), otherwise the current batch will be flushed.
The basic question here is: how do you ALWAYS keep your sprites within the FitViewport? How do you keep a reference to the view in order to have the proper coordinates for where to draw?
I'm trying to spawn enemies into the gameplay screen. But this is handled by a FitViewport, and enemies and even the player can move outside the FitViewport on certain screen resolutions. So far the problem seems to be in the Y axis.
The FitViewport is made like this:
gameCamera = new OrthographicCamera();
gameCamera.setToOrtho(false);
gameViewport = new FitViewport(MyGame.WORLD_WIDTH,MyGame.WORLD_HEIGHT,gameCamera);
gameViewport.setScreenBounds(0,0,MyGame.WORLD_WIDTH,MyGame.WORLD_HEIGHT);
Then the camera position gets updated like this at the resize() method:
gameViewport.update(width,height); //not used when using the virtual viewport in the render method.
gameCamera.position.set(player.position.x + 200,player.position.y, 0);
Then the update() method calls the Player's own update() method which includes these lines:
//POSITION UPDATE
if (this.position.x<0) this.position.x=0;
if (this.position.x>Gdx.graphics.getWidth() - width) this.position.x= Gdx.graphics.getWidth() - width;
if (this.position.y<0) this.position.y = 0;
if (this.position.y>PlayScreen.gameViewport.getScreenHeight() - height) this.position.y = PlayScreen.gameViewport.getScreenHeight()- height;
Notice that for the X axis I'm still using the Gdx.graphics dimensions, because I've yet to make it work with PlayScreen.gameViewport.getScreenHeight() (gameViewport has been made static for this purpose).
Also, for enemy spawning (the problem here is that they spawn outside the visible screen on the Y axis), I have this code inside the update() method of the Screen that implements all these viewports:
//Alien Spawn
if (System.currentTimeMillis() - lastZSpawn >= SpawnTimer){
count++;
lastZSpawn= System.currentTimeMillis();
for (int i=0;i<count;i++){
int x = Gdx.graphics.getWidth();
int y = random.nextInt((int)gameViewport.getScreenHeight() - Alien.height);
if (entities.size()<6){
entities.add(new Alien(new Vector2(x,y),1, alienImages,(float)((0))));
}
}
}
I'm also using gameViewport.getScreenHeight() here because Gdx.graphics wasn't giving the correct result (it gave me the same issue, really).
The render() method is correctly implemented in terms of the batch and applying the viewport:
MyGame.batch.setProjectionMatrix(gameCamera.combined);
gameViewport.apply();
MyGame.batch.begin();
for (int i = entities.size()-1; i>=0;i--){
entities.get(i).render();
}
You should never change the position of your player or enemies when resizing; that's what a viewport is for, so remove all the code that does that first. To make your viewport work as you expect, you need to create a new instance of the camera when you resize, passing the new viewport width and height. I prefer to make my camera static so I can access its attributes from anywhere I want. You should do something like this:
public static OrthographicCamera update(int width,int height){
instance = new OrthographicCamera(width, height);
instance.setToOrtho(false);
return instance;
}
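A hedged example of how that could be wired up from the Screen's resize(), assuming the static update() above lives in a camera-manager class (CameraManager here is a placeholder name) and gameCamera/gameViewport are the fields from the question:
@Override
public void resize(int width, int height) {
    gameCamera = CameraManager.update(width, height); // recreate the camera with the new size
    gameViewport.setCamera(gameCamera);
    gameViewport.update(width, height, true);         // true re-centers the camera
}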
I posted the answer to my problem myself in another question of mine, which was also driven by confusion about the implementation of FitViewports: use the worldWidth and worldHeight properties as the reference coordinates when drawing objects into the game, and set the camera position taking these values into consideration as well.
The answer is here, even though it's text and not code, and it's mainly what I already wrote in this very post: FitViewport doesnt scale properly Libgdx
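In short, the fix is to work in world units taken from the viewport instead of screen pixels. A minimal sketch, reusing gameViewport, width, height and Alien.height from the question (everything else is illustrative):
// Clamp the player inside the world, not the screen.
position.x = MathUtils.clamp(position.x, 0, gameViewport.getWorldWidth() - width);
position.y = MathUtils.clamp(position.y, 0, gameViewport.getWorldHeight() - height);
// Spawn aliens in world units as well.
float spawnX = gameViewport.getWorldWidth();
float spawnY = MathUtils.random(0, gameViewport.getWorldHeight() - Alien.height);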
A few days ago I figured out how to do some scrolling in LibGDX. Now I'm trying to do something related: I want to repeat the background. My scrolling follows a ship (it's a space ship game). In the background there is a space photo loaded as a Texture. When the ship reaches the end of the background, it keeps going and there's no background anymore. I have read about wrapping but I don't really understand how it works. I did this:
px=new Pixmap(Gdx.files.internal("fondo.jpg"));
background=new Texture(px);
background.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
And then, in my render method
spriteBatch.begin();
spriteBatch.draw(background,0,0,500,50);
drawShip();
spriteBatch.end();
Of course it doesn't work; it only draws the background once. I don't know how to make this wrap method work. Any help?
SOLUTION
I figured it out. It's not nice code, but it works.
First I declare two Textures with the same image:
bck1=new Texture(Gdx.files.internal("fondo.jpg"));
bck2=new Texture(Gdx.files.internal("fondo.jpg"));
I also declare two variables like this to specify the X position of each background:
int posXBck1=0,posXBck2=0;
Then I use this in render():
public void calculoPosicionFondos(){
posXBck2=posXBck1+ANCHODEFONDO;
if(cam.position.x>=posXBck2+cam.viewportWidth/2){
posXBck1=posXBck2;
}
}
Where:
ANCHODEFONDO is the width of my background
cam is an OrthographicCamera.
So if the cam is over bck2 (which means you can't see bck1 anymore), the positions change: bck1 gets the position of bck2 and, in the next render loop, bck2 is recalculated.
Then just draw both backgrounds in your render method.
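The drawing step that the solution leaves implicit looks like this, using the bck1/bck2 textures and posXBck1/posXBck2 from above (the y coordinate is just an example):
spriteBatch.begin();
spriteBatch.draw(bck1, posXBck1, 0);
spriteBatch.draw(bck2, posXBck2, 0);
drawShip();
spriteBatch.end();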
Like Teitus said, do not load your texture multiple times, ever! Anyway, you were on the right track with the wrapping:
texture.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
Now you can just use the draw method with the source location. The source location is the area of the texture you choose to draw.
batch.draw(texture, x, y, srcX, srcY, srcWidth, srcHeight)
To scroll your texture from right to left all you have to do is increase srcX incrementally. So create an int that increments in the update/render method.
int sourceX = 0;
//render() method
//Increment the variable where to draw from on the image.
sourceX += 10;
//Simply draw it using that variable in the srcX.
batch.draw(YourTexture, 0, 0, sourceX, 0, screenWidth, screenHeight);
Because you are wrapping the texture, it will wrap/loop and scroll indefinitely. There might be an issue with the sourceX int if the game runs for a very long time, because an int can only hold 2147483647. It would take a while, but you can fix it by subtracting the image width each time the number goes over the total image width.
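If you ever care about that, something like this keeps sourceX bounded (a sketch using the names from the snippet above):
sourceX += 10;
if (sourceX >= YourTexture.getWidth()) {
    sourceX -= YourTexture.getWidth(); // stay within one image width; the Repeat wrap keeps it seamless
}
batch.draw(YourTexture, 0, 0, sourceX, 0, screenWidth, screenHeight);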
Don't do this, please:
bck1=new Texture(Gdx.files.internal("fondo.jpg"));
bck2=new Texture(Gdx.files.internal("fondo.jpg"));
That will load your big background texture twice. That's a complete waste. If you want to keep your solution at least do:
bck1=new Texture(Gdx.files.internal("fondo.jpg"));
bck2=bck1;
Regarding the texture wrapping: if your texture is 500px wide and you draw a 500px sprite, you won't see any repetition. If you want it repeated 2 times, draw it 1000px wide with 0-2 texture coordinates.
I'm not sure how SpriteBatch handles the call you posted; you could try that one, or maybe use the overload that takes a TextureRegion and set your region manually.
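A sketch of that TextureRegion variant, assuming the background drawn at 500x50 from the question and the Repeat wrapping already set on it (u/v coordinates beyond 1 tile the texture):
TextureRegion region = new TextureRegion(background);
region.setRegion(0f, 0f, 2f, 1f);          // u from 0 to 2 repeats the image twice horizontally
spriteBatch.draw(region, 0, 0, 1000, 50);  // drawn 1000px wide -> two repetitions side by side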
I see this is a pretty old question, but I think there is an easier way to accomplish background scrolling. Just use the Sprite class. Here is a snippet I use for layered background images that scroll from right to left.
public class LevelLayer
{
public float speedScalar = 1;
private List<Sprite> backgroundSprites = new ArrayList<Sprite>();
public LevelLayer()
{
}
public void addSpriteLayer(Texture texture, float startingPointX, float y, int repeats)
{
for (int k = 0; k < repeats; k++)
{
Sprite s = new Sprite(texture);
s.setX(startingPointX + (k*texture.getWidth()));
s.setY(y);
backgroundSprites.add(s);
}
}
public void render(SpriteBatch spriteBatch, float speed)
{
for (Sprite s : backgroundSprites)
{
float delta = s.getX() - (speed * speedScalar);
s.setX(delta);
s.draw(spriteBatch);
}
}
}
Then you can use the same texture or series of textures like so:
someLayer.addSpriteLayer(sideWalkTexture1, 0, 0, 15);
someLayer.addSpriteLayer(sideWalkTexture2, 15 * sideWalkTexture1.getWidth(), 0, 7);
I change background repeating sections randomly in code and make new ones or reset existing sets when they go off screen. All the layers go to a pool and get pulled randomly when a new one is needed.
I'm trying to create a camera to move around a 3D space and am having some problems setting it up. I'm doing this in Java, and apparently using gluPerspective and gluLookAt together creates a conflict (the screen starts flickering like mad).
gluPerspective is set like this:
gl.glMatrixMode(GLMatrixFunc.GL_PROJECTION);
gl.glLoadIdentity();
glu.gluPerspective(50.0f, h, 1.0, 1000.0);
gl.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
I then create a camera matrix, making use of the eye coordinates and the forward and up vectors (http://people.freedesktop.org/~idr/glu3/form_4.png) (let's assume the code for the camera is correct).
Lastly, before I draw anything I have:
gl.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glMultMatrixf(camera.matrix);
And then I call my drawing routines (which do some translation/rotation on their own by calling glRotatef and glTranslatef).
Without the call to glMultMatrixf, the camera shows the items I need to see in the centre of the screen as it should. With glMultMatrixf, however, all I get is a black screen. I tried using glLoadMatrixf instead and it didn't work either. Am I doing something wrong? Am I putting something out of place? If not, and this is how it should be done, let me know and I'll post some of the camera code that might be creating the conflicts.
EDIT: Here is the camera matrix creation code:
private void createMatrix()
{
float[] f = new float[3]; //forward (centre-eye)
float[] s = new float[3]; //side (f x up)
float[] u = new float[3]; //'new up' (s x f)
for(int i=0;i<3;i++){
f[i] = centre[i]-eye[i];
}
f = Maths.normalize(f);
s = Maths.crossProduct(f,upVec);
u = Maths.crossProduct(s,f);
float[][] mtx = new float[4][4];
float[][] mtx2 = new float[4][4];
//initializing matrices to all 0s
for (int i = 0; i < mtx.length; i++) {
for (int j = 0; j < mtx[0].length; j++) {
mtx[i][j] = 0;
mtx2[i][j] = 0;
}
}
//mtx = [ [s] 0,[u] 0,[-f] 0, 0 0 0 1]
//mtx2 = [1 0 0 -eye(x), 0 1 0 -eye(y), 0 0 1 -eye(z), 0 0 0 1]
for(int i=0;i<3;i++){
mtx[0][i] = s[i];
mtx[1][i] = u[i];
mtx[2][i] = -f[i];
mtx2[i][3]=-eye[i];
}
mtx[3][3] = 1;
mtx2[0][0]=1;mtx2[1][1] = 1;mtx2[2][2] = 1;mtx2[3][3] = 1;
mtx = Maths.matrixMultiply(mtx,mtx2);
for(int i=0;i<4;i++){
for(int j=0;j<4;j++){
// this.mtx is a float[16] for glMultMatrixf
this.mtx[i*4+j] = mtx[i][j];
}
}
}
I'm hoping the error is somewhere in this piece of code; if not, I'll have a look at my maths functions to see what's going on.
EDIT2: Though I should mention that at least the initial vectors (eye, centre, up) are correct and do put the camera where it should be (it worked with gluLookAt, but had the flickering issue).
It might be simpler to use glRotatef, glTranslatef, and glFrustum to create the camera, although your math seems fine to me (just as long as UpVec is actually defined). In most of the 3D graphics that I have done, you didn't really have a defined object that you wanted to track. I went through various implementations of a 3D camera using gluLookAt before I finally settled on this.
Here is how I tend to define my cameras:
When I create or initialize my camera, I set up the projection matrix with glFrustum. You can use gluPerspective if you prefer:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(left, right, down, up, near, far);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
After I clear the color and depth buffers for a render pass, I call:
glLoadIdentity();
glRotated(orientation.x, 1.0, 0.0, 0.0);
glRotated(orientation.y, 0.0, 1.0, 0.0);
glRotated(orientation.z, 0.0, 0.0, 1.0);
glTranslatef(position.x, position.y, position.z);
To position and orient the camera. Initially, you set position and orientation both to {0}, then add or subtract from position when a key is pressed, and add or subtract from orientation.x and orientation.y when the mouse is moved... (I generally don't mess with orientation.z)
Cheers.
Fixed it, kind of. The problem was using glMultMatrixf(float[] matrix, int offset)... for some reason if I just use glMultMatrixf(FloatBuffer matrix) it works fine.
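For anyone hitting the same thing, the working variant looks roughly like this; Buffers is JOGL 2's com.jogamp.common.nio.Buffers helper (older JOGL releases had a BufferUtil class instead, so adjust to your version):
// Wrap the float[16] camera matrix in a direct FloatBuffer and multiply it onto the modelview matrix.
FloatBuffer matrixBuffer = Buffers.newDirectFloatBuffer(camera.matrix);
gl.glMultMatrixf(matrixBuffer);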
There are some issues with the transformations I'm making but I should be able to deal with those... Thank you for your input though guys.