I am loading chunk values from a height map to create infinite terrain, but I'm having issues with the rendering of each mesh chunk. When I load a single 500x500 unit mesh, it is smooth. When I load a 5x5 grid of 100x100 meshes, it is a jumbled-up mess.
I made the two different types of meshes save an image of the height of each mesh, and they both give a smooth height map with gradual value changes.
...But when I render them, this is what I see:
500x500 mesh
5x5 100x100 mesh
As you can see, the 5x5 grid isn't aligned at all. The first three chunks (0:0, 0:1, and 1:0) seem to look correct, but the others are all different. Every rotation in their transformation matrices is vec3(0,0,0), yet they generate like this. Here is the code used to render them:
public void render() {
    shader.start();
    shader.loadViewMatrix(Maths.createViewMatrix(engine.getPlayer()));
    shader.loadLight(engine.getManagers().getLightManager().getLights().get(0));
    for (Chunk chunk : engine.getManagers().getTerrainManager().getLoadedChunks().values()) {
        RawModel model = chunk.getModel();
        bindChunk(chunk);
        loadShaderUniforms(chunk);
        GL11.glDrawElements(GL11.GL_TRIANGLES, model.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);
        unbindChunk();
    }
    shader.stop();
}

private void bindChunk(Chunk chunk) {
    RawModel rawModel = chunk.getModel();
    GL30.glBindVertexArray(rawModel.getVaoID());
    GL20.glEnableVertexAttribArray(0); // position
    GL20.glEnableVertexAttribArray(1); // textureCoordinates
    GL20.glEnableVertexAttribArray(2); // normal

    ModelTexture texture = chunk.getTexture();
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture.getTextureID());
}

private void unbindChunk() {
    GL20.glDisableVertexAttribArray(0);
    GL20.glDisableVertexAttribArray(1);
    GL20.glDisableVertexAttribArray(2);
}

private void loadShaderUniforms(Chunk chunk) {
    Matrix4f transformation = Maths.createTransformationMatrix(chunk.getLocation().getPosition(),
            chunk.getLocation().getRotation(), 1, true);
    System.out.println(chunk.getLocation().getRotation());
    shader.loadTransformationMatrix(transformation);
    shader.setShowHeightMap(chunk.shouldShowHeightMap());
}
I'm not familiar with the details of how LWJGL uses VAOs, so it might be a simple issue I'm forgetting. Any help would be appreciated, as I've spent several days just narrowing the issue down to a rendering problem rather than a problem with the Perlin noise/mesh generation.
I discovered the issue! There was a flaw in how I created the mesh that caused the triangles to render flipped over. I found this by turning off culling, observing the mesh from both sides, and moving the three vertex coordinates around. Fixing the vertex winding (swapping it from clockwise to counter-clockwise and then reversing the X and Z coordinates) solved it.
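In case it helps anyone else, here is a minimal sketch of the winding part of that fix (assuming an indexed triangle list; the indices array is a hypothetical name for mine):

// Swap the second and third index of every triangle so a clockwise
// winding becomes counter-clockwise (and vice versa).
for (int i = 0; i < indices.length; i += 3) {
    int tmp = indices[i + 1];
    indices[i + 1] = indices[i + 2];
    indices[i + 2] = tmp;
}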
I am using Processing and am copying the pixels[] array pixel by pixel in a for loop to a PGraphics object.
I am interested in using pushMatrix() and popMatrix() along with some transformations, but I cannot find any info on how the translate(), rotate(), and scale() functions affect how the pixels[] array is organized.
Also, in the info I could find, it says to push the matrix, then draw, and then pop the matrix back to its original state. I am curious if copying pixels pixel by pixel would count as drawing. I know that image() is affected, but what else? Where can I find a list? What are all the types of drawing and editing of pixels that the matrix transformations affect?
Thanks
If you want to render an image into a PGraphics instance, there's no need to manually access the pixels[] array, pixel by pixel.
Notice that PGraphics provides image() which can take prior transformations (translation/rotation/scale) into account.
Here's a basic example:
PImage testImage;
PGraphics buffer;

void setup(){
  size(400,400);
  testImage = createImage(100,100,RGB);
  //make some noise
  for(int i = 0; i < testImage.pixels.length; i++){
    testImage.pixels[i] = color(random(255),random(255),random(255));
  }
  testImage.updatePixels();
  //setup PGraphics
  buffer = createGraphics(width,height);
  buffer.beginDraw();
  //apply localised transformations
  buffer.pushMatrix();
  buffer.translate(width / 2, height / 2);
  buffer.rotate(radians(45));
  buffer.scale(1.5);
  //render transformed image
  buffer.image(testImage,0,0);
  buffer.popMatrix();
  //draw the image with no transformations
  buffer.image(testImage,0,0);
  buffer.endDraw();
}

void draw(){
  image(buffer,0,0);
}
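As for the matrix question: writing into pixels[] copies raw pixel data and bypasses the matrix stack entirely, so translate(), rotate(), and scale() have no effect on it. The transformations only apply to drawing calls such as image(), rect(), line(), and the other shape functions.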
I'm working on creating a voxel engine in LWJGL 3, I have all the basics down (chunks, mesh rendering, etc).
Now I'm working on adding physics using JBullet. This is my first time using JBullet directly, but I've used Bullet before in other 3D engines.
From here I gathered that all I needed to do to create a collision object the same shape as my mesh was to plug the vertices and indices into a TriangleIndexVertexArray and use that for a BvhTriangleMeshShape.
Here is my code:
float[] coords = mesh.getVertices();
int[] indices = mesh.getIndices();
if (indices.length > 0) {
    IndexedMesh indexedMesh = new IndexedMesh();
    indexedMesh.numTriangles = indices.length / 3;
    indexedMesh.triangleIndexBase = ByteBuffer.allocateDirect(indices.length * Float.BYTES).order(ByteOrder.nativeOrder());
    indexedMesh.triangleIndexBase.asIntBuffer().put(indices);
    indexedMesh.triangleIndexStride = 3 * Float.BYTES;
    indexedMesh.numVertices = coords.length / 3;
    indexedMesh.vertexBase = ByteBuffer.allocateDirect(coords.length * Float.BYTES).order(ByteOrder.nativeOrder());
    indexedMesh.vertexBase.asFloatBuffer().put(coords);
    indexedMesh.vertexStride = 3 * Float.BYTES;

    TriangleIndexVertexArray vertArray = new TriangleIndexVertexArray();
    vertArray.addIndexedMesh(indexedMesh);

    boolean useQuantizedAabbCompression = false;
    BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
    CollisionShape collisionShape = meshShape;

    CollisionObject colObject = new CollisionObject();
    colObject.setCollisionShape(collisionShape);
    colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f)));
    dynamicsWorld.addCollisionObject(colObject);
} else {
    System.err.println("Failed to extract geometry from model.");
}
I know that the vertices and indices are valid as I'm getting them here after drawing my mesh.
This seems to somewhat work, but when I try to drop a cube rigidbody onto the terrain, it collides way above the terrain! (I know that the cube is set up correctly, because if I remove the mesh collider it hits the base ground plane at y=0.)
I thought maybe it was a scaling issue (although I don't see how that could be), so I tried changing:
colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f)));
to:
colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 0.5f)));
But after changing the scale from 1 it acted like the mesh collider didn't exist.
It's hard to find any resources or code for JBullet surrounding mesh collision, and I've been working on this for almost 2 days, so I'm hoping maybe some of you people who have done it before can help me out :)
Update 1:
I created an implementation of the IDebugDrawer so I could draw the debug information in the scene.
To test it, I ran it with just a basic ground plane and a falling cube. I noticed that while the cube is falling the aabb matches the cube size, but when it hits the floor the aabb becomes significantly larger than it was.
I'm going to assume that this is normal Bullet behavior due to collision bouncing, and look into it later, as it doesn't affect my current problem.
I re-enabled the generation of the colliders from the chunk meshes, and saw this:
It appears that the aabb visualization of the chunk is a lot higher than the actual chunk (I know the y positioning of the overall collision object is correct).
I'm going to try to figure out if I can draw the actual collision mesh or not.
Update 2:
As far as I can see looking at the source, the mesh of the colliders should be drawn in debug mode, so I'm not sure why it isn't.
I tried changing the box rigidbody to a sphere, and it actually rolled across the top of the visualized aabb for the terrain collider. It just rolled flat, though, and didn't go up or down where there were hills or dips in the terrain, so it was obviously rolling across the flat top of the aabb.
So after adding in the debug drawer, I was confused as to why the aabb was 2x larger than it should have been.
After spending hours trying little adjustments, I noticed something odd - there was a 0.25 gap between the collider and the edge of the chunk. I proceeded to zoom out and surprisingly noticed this:
An extra row and column of colliders? No, that doesn't make sense; there should be 5x5 colliders to match the 5x5 chunks.
Then I counted blocks and realized that the colliders were spanning 64 blocks (my chunks are 32x32!).
I quickly realized that this was a scaling issue, and after adding
BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
meshShape.setLocalScaling(new Vector3f(0.5f, 0.5f, 0.5f));
to scale the colliders down by half, everything fit and worked! My "sphere" rolled and came to a stop where there was a hill in the terrain, like it should.
My full code for converting an LWJGL mesh to a JBullet mesh collider is:
public void addMesh(org.joml.Vector3f position, Mesh mesh) {
    float[] coords = mesh.getVertices();
    int[] indices = mesh.getIndices();
    if (indices.length > 0) {
        IndexedMesh indexedMesh = new IndexedMesh();
        indexedMesh.numTriangles = indices.length / 3;
        indexedMesh.triangleIndexBase = ByteBuffer.allocateDirect(indices.length * Integer.BYTES).order(ByteOrder.nativeOrder());
        indexedMesh.triangleIndexBase.rewind();
        indexedMesh.triangleIndexBase.asIntBuffer().put(indices);
        indexedMesh.triangleIndexStride = 3 * Integer.BYTES;
        indexedMesh.numVertices = coords.length / 3;
        indexedMesh.vertexBase = ByteBuffer.allocateDirect(coords.length * Float.BYTES).order(ByteOrder.nativeOrder());
        indexedMesh.vertexBase.rewind();
        indexedMesh.vertexBase.asFloatBuffer().put(coords);
        indexedMesh.vertexStride = 3 * Float.BYTES;

        TriangleIndexVertexArray vertArray = new TriangleIndexVertexArray();
        vertArray.addIndexedMesh(indexedMesh);

        boolean useQuantizedAabbCompression = false;
        BvhTriangleMeshShape meshShape = new BvhTriangleMeshShape(vertArray, useQuantizedAabbCompression);
        meshShape.setLocalScaling(new Vector3f(0.5f, 0.5f, 0.5f));

        CollisionShape collisionShape = meshShape;
        CollisionObject colObject = new CollisionObject();
        colObject.setCollisionShape(collisionShape);
        colObject.setWorldTransform(new Transform(new Matrix4f(new Quat4f(0, 0, 0, 1), new Vector3f(position.x, position.y, position.z), 1f)));
        dynamicsWorld.addCollisionObject(colObject);
    } else {
        System.err.println("Failed to extract geometry from model.");
    }
}
Update 1:
Even though the scaling was the fix for said problem, it made me look deeper and realize that I had mistakenly used the block size (0.5f) as the mesh scaling factor in my mesh view matrix. Changing the scale to 1, as it should be, fixed it.
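In other words, the correction was roughly the following (a sketch assuming a JOML-style matrix, since addMesh takes an org.joml.Vector3f; modelMatrix and the exact call chain are my own names):

// Before (wrong): the 0.5f block size was baked into the mesh's matrix,
// so the rendered mesh was half the size of the physics mesh.
modelMatrix.translation(position).scale(0.5f);

// After (right): the mesh vertices are already in world units.
modelMatrix.translation(position).scale(1.0f);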
Good night, friends.
I'm having trouble drawing a fixed point on the screen when the camera is rotated. I rotate the camera around the player's position using the rotateAround method.
It seems to me that I have to rotate this fixed point around the player's position as well. I use this snippet, learned here on Stack Overflow:
public void rotate(Vector3 position, Vector3 centerPoint) {
    this.cosTemp = MathUtils.cosDeg(this.anguloAtual);
    this.senTemp = MathUtils.sinDeg(this.anguloAtual);
    this.xTemp = centerPoint.x + ((position.x - centerPoint.x) * this.cosTemp) - ((position.y - centerPoint.y) * this.senTemp);
    this.yTemp = centerPoint.y + ((position.y - centerPoint.y) * this.cosTemp) + ((position.x - centerPoint.x) * this.senTemp);
    position.set(this.xTemp, this.yTemp, 0);
}
When drawing the player on the screen, I use the player's position, then call camera.project, and then the rotate method. The fixed point appears; however, it is not exactly fixed.
As an example, I used a fixed point slightly ahead of the player:
public void meDesenhar(SpriteBatch spriteBatch) {
    spriteBatch.begin();
    this.spritePlayer.setPosition(this.positionPlayer.x - (this.spritePlayer.getWidth() / 2),
            this.positionPlayer.y - this.spritePlayer.getHeight() / 2);
    this.spritePlayer.draw(spriteBatch);
    spriteBatch.end();

    originPosition.set(positionPlayer, 0);
    fixedPosition.set(positionPlayer.x, positionPlayer.y + 10, 0);
    cameraTemp.project(fixedPosition);
    cameraTemp.project(originPosition);
    cameraManagerTemp.rotate(fixedPosition, originPosition);
    Debugagem.drawPointInScreen(Color.BLUE, fixedPosition);
}
My questions:
1 - Am I doing something wrong, or is it just a result of rounding? I noticed while debugging that the player's position changes a little on every rotation after camera.project; for example, position (540, 320) became (539.99, 320.013).
2 - I tried using SpriteBatch's draw method to perform the rotation, but I could not make it rotate around the player; I would arrive at the same result.
3 - Can I use two cameras? Each camera would be a layer: one camera for the map and the player, the other for the fixed point. Is that viable? I could not find any example that works with more than one camera at the same time. Does anyone know of any examples? I'm not talking about HUDs or stage cameras.
Video follows.
https://www.youtube.com/watch?v=1Vg8haN5ULE
Thank you.
It can be a result of rounding, since it's only moving by about a pixel.
You can calculate the rotation from the player, but it's not necessary.
Of course you can use multiple cameras in your game, and in this case you should.
Here are a few screenshots from my old projects where I used multiple cameras.
As you can see, you can even use different types of cameras, like ortho and perspective, both 2D and 3D.
Just create a new camera like the first one and change the projection matrix:
camrotate = new OrthographicCamera(540, 960);
//...
camfixed = new OrthographicCamera(540, 960);
//...
And in the render method:
batch.setProjectionMatrix(camrotate.combined);
batch.begin();
//draw in camrotate now
//...
//...
batch.end();
batch.setProjectionMatrix(camfixed.combined);
batch.begin();
//draw fixed elements now
//...
//...
batch.end();
//add one more camera if you need
Edit:
Change the projection matrix outside of batch.begin()/end(); otherwise the current batch will be flushed.
A few days ago I figured out how to do scrolling in LibGDX. Now I'm trying to do something related: I want to repeat the background. My scrolling follows a ship (it's a space ship game). In the background there is a space photo loaded as a Texture. When the ship reaches the end of the background, it keeps going and there's no background anymore. I have read about wrap, but I don't really understand how it works. I did this:
px=new Pixmap(Gdx.files.internal("fondo.jpg"));
background=new Texture(px);
background.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
And then, in my render method
spriteBatch.begin();
spriteBatch.draw(background,0,0,500,50);
drawShip();
spriteBatch.end();
Of course it doesn't work; it only draws the background once. I don't know how to make this wrap method work. Any help?
SOLUTION
I figured it out. It's not nice code, but it works.
First, I declare two Textures with the same image:
bck1=new Texture(Gdx.files.internal("fondo.jpg"));
bck2=new Texture(Gdx.files.internal("fondo.jpg"));
I also declare two variables like this to hold the X position of each background:
int posXBck1=0,posXBck2=0;
Then I use this in render():
public void calculoPosicionFondos(){
    posXBck2 = posXBck1 + ANCHODEFONDO;
    if (cam.position.x >= posXBck2 + cam.viewportWidth / 2) {
        posXBck1 = posXBck2;
    }
}
Where:
ANCHODEFONDO is the width of my background.
cam is an OrthographicCamera.
So, if the cam has moved fully onto bck2 (which means you can't see bck1 anymore), the positions change: bck1 takes the position of bck2, and in the next render loop bck2 is recalculated.
Then just draw both backgrounds in your render method, as sketched below.
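The drawing itself is then just two draw calls per frame (a sketch reusing the names above; drawShip() is from the original render method):

spriteBatch.begin();
// Draw the background at both positions; together they always cover the camera.
spriteBatch.draw(bck1, posXBck1, 0);
spriteBatch.draw(bck2, posXBck2, 0);
drawShip();
spriteBatch.end();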
Like Teitus said, do not load your texture multiple times, ever! Anyway, you were on the right track with the wrapping:
texture.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
Now you can just use the draw method with the source location. The source location is the area of the texture you choose to draw from.
batch.draw(texture, x, y, srcX, srcY, srcWidth, srcHeight)
To scroll your texture from right to left, all you have to do is increase srcX incrementally. So create an int that increments in the update/render method.
int sourceX = 0;
//render() method
//Increment the variable where to draw from on the image.
sourceX += 10;
//Simply draw it using that variable in the srcX.
batch.draw(YourTexture, 0, 0, sourceX, 0, screenWidth, screenHeight);
Because you are wrapping the texture, it will loop and scroll indefinitely. There might be an issue with the sourceX int if the game runs for a very long time, because an int can only hold 2147483647. It takes a while, but you can fix it by subtracting the image width each time the number goes over the total image width, as in the sketch below.
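A sketch of that wrap-around fix, reusing the names above (the width comes from the texture itself):

sourceX += 10;
// With Repeat wrapping the image looks identical every texture-width pixels,
// so subtracting the width keeps sourceX bounded without a visible jump.
if (sourceX >= YourTexture.getWidth()) {
    sourceX -= YourTexture.getWidth();
}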
Don't do this, please:
bck1=new Texture(Gdx.files.internal("fondo.jpg"));
bck2=new Texture(Gdx.files.internal("fondo.jpg"));
That will load your big background texture twice. That's a complete waste. If you want to keep your solution at least do:
bck1=new Texture(Gdx.files.internal("fondo.jpg"));
bck2=bck1;
Regarding the texture wrapping: if your texture is 500px wide and you draw a 500px sprite, you won't see any repetition. If you want it repeated 2 times, draw it 1000px wide with 0-2 texture coordinates.
I'm not sure how SpriteBatch handles the call you posted; you could try that one, or maybe use the overload that takes a TextureRegion and set your region manually.
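For instance, something along these lines (a sketch; setRegion takes normalized u/v coordinates, and it assumes Repeat wrapping is set on the texture as in the question):

TextureRegion region = new TextureRegion(background);
// u runs from 0 to 2, so the texture repeats twice horizontally.
region.setRegion(0f, 0f, 2f, 1f);
spriteBatch.draw(region, 0, 0, 1000, 500); // the quad size here is illustrative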
I see this is a pretty old question, but I think there is an easier way to accomplish background scrolling. Just use the Sprite class. Here is a snippet I use for layered background images that scroll from right to left.
public class LevelLayer
{
    public float speedScalar = 1;
    private List<Sprite> backgroundSprites = new ArrayList<Sprite>();

    public LevelLayer()
    {
    }

    public void addSpriteLayer(Texture texture, float startingPointX, float y, int repeats)
    {
        for (int k = 0; k < repeats; k++)
        {
            Sprite s = new Sprite(texture);
            s.setX(startingPointX + (k * texture.getWidth()));
            s.setY(y);
            backgroundSprites.add(s);
        }
    }

    public void render(SpriteBatch spriteBatch, float speed)
    {
        for (Sprite s : backgroundSprites)
        {
            float delta = s.getX() - (speed * speedScalar);
            s.setX(delta);
            s.draw(spriteBatch);
        }
    }
}
Then you can use the same texture or series of textures like so:
someLayer.addSpriteLayer(sideWalkTexture1, 0, 0, 15);
someLayer.addSpriteLayer(sideWalkTexture2, 15 * sideWalkTexture1.getWidth(), 0, 7);
I change background repeating sections randomly in code and make new ones or reset existing sets when they go off screen. All the layers go to a pool and get pulled randomly when a new one is needed.
I am trying to find the coordinates of my mouse on a flat 3D surface. After googling a bit on that, I found out that you use gluUnProject for doing so. So, I implemented that. Here is my code (when taking away the parts that are not interesting):
public class Input {
    private FloatBuffer modelView = BufferUtils.createFloatBuffer(16);
    private FloatBuffer projection = BufferUtils.createFloatBuffer(16);
    private IntBuffer viewport = BufferUtils.createIntBuffer(16);
    private FloatBuffer location = BufferUtils.createFloatBuffer(3);
    private FloatBuffer winZ = BufferUtils.createFloatBuffer(1);

    public float[] getMapCoords(int x, int y)
    {
        modelView.clear().rewind();
        projection.clear().rewind();
        viewport.clear().rewind();
        location.clear().rewind();
        winZ.clear().rewind();

        glGetFloat(GL_MODELVIEW_MATRIX, modelView);
        glGetFloat(GL_PROJECTION_MATRIX, projection);
        glGetInteger(GL_VIEWPORT, viewport);

        float winX = (float) x;
        float winY = (float) viewport.get(3) - (float) y;
        glReadPixels(x, (int) winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, winZ);
        gluUnProject(winX, winY, winZ.get(0), modelView, projection, viewport, location);

        return new float[] { location.get(0), location.get(1), location.get(2) };
    }
}
When I call this function, passing in the X and Y coordinates of the mouse, I get numbers that increase by about 100 for every pixel I move the mouse (it should be around 1). After printing various variables, I found that the winZ buffer contains the value 1.0. My gluPerspective is set up with its near clipping plane at 0.1 and its far plane at 10000, which would explain why the number increases that rapidly. Yet I don't know how to force OpenGL to use my flat plane instead when finding this distance.
So now I am wondering; if this is the correct/best/easiest method for finding the mouse coordinates on a surface in the 3D world, what could I be doing wrong? If it is not, what is a better way of doing it?
Yes, this is the correct method to use. You are probably just calling it at a point where the matrices do not contain good values (possibly caused by glPushMatrix()/glPopMatrix() calls in your code).
To confirm you are getting good coordinates, draw a single GL_POINT at the coordinates you get, after you get them (disable depth test and increase point size to, say, 10 pixels). If the point is moving under your mouse, then the calculated coords are correct but the matrices are not.
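A sketch of that debug point, in the same legacy-GL style as the question (input, mouseX, and mouseY are placeholder names):

float[] p = input.getMapCoords(mouseX, mouseY);
glDisable(GL_DEPTH_TEST); // make sure the point is never hidden by geometry
glPointSize(10);
glBegin(GL_POINTS);
glVertex3f(p[0], p[1], p[2]);
glEnd();
glEnable(GL_DEPTH_TEST);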