I'm trying to display a simple textured trapezoid (a road in perspective). For this I'm using SpriteBatch.draw with a vertex array as a parameter. But the result is unexpected.
What I expected:
What I got:
What exactly went wrong? Or am I using the wrong method?
Here is the code:
@Override
public void create () {
texture = new Texture("road.jpg");
spriteBatch = new SpriteBatch();
float color = Color.toFloatBits(255, 255, 255, 255);
verts = new float[]{
0, 0, color, 0, 0,
Gdx.graphics.getWidth()/2-100, 300, color, 0, 1,
Gdx.graphics.getWidth()/2+100, 300, color, 1, 1,
Gdx.graphics.getWidth(), 0, color, 1, 0,
};
shapeRenderer = new ShapeRenderer();
}
@Override
public void render () {
Gdx.gl.glClearColor(0, 0.5f, 1f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
shapeRenderer.begin(ShapeRenderer.ShapeType.Filled);
shapeRenderer.rect(0, 0, 640, 300);
shapeRenderer.end();
spriteBatch.begin();
spriteBatch.draw(texture, verts, 0, verts.length);
spriteBatch.end();
}
From documentation
draw(Texture texture, float[] spriteVertices, int offset, int count)
Draws a rectangle using the given vertices.
libGDX draws these "rectangles" as two triangles, and the texture coordinates are interpolated independently within each triangle. In a rectangle both triangles cover equal areas of the texture, but in a trapezoid they don't, so the texture appears distorted along the diagonal. This is the classic affine texture-mapping artifact.
A solution would be to render the road as real 3D geometry with a perspective projection, or to build a custom Mesh with more subdivisions.
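The difference can be seen numerically: affine interpolation blends u linearly in screen space, while perspective-correct interpolation blends u/w and 1/w and divides per fragment. A plain-Java sketch (the numbers are illustrative, not taken from the road in the question):

```java
public class PerspectiveInterp {

    // Affine interpolation: u blended linearly in screen space, ignoring
    // depth. This is effectively what a 2D SpriteBatch does.
    static float affineU(float u0, float u1, float t) {
        return u0 + (u1 - u0) * t;
    }

    // Perspective-correct interpolation: blend u/w and 1/w linearly,
    // then divide. This is what a 3D pipeline does with real depth.
    static float perspectiveU(float u0, float w0, float u1, float w1, float t) {
        float uOverW = (u0 / w0) * (1 - t) + (u1 / w1) * t;
        float oneOverW = (1 / w0) * (1 - t) + (1 / w1) * t;
        return uOverW / oneOverW;
    }

    public static void main(String[] args) {
        // An edge from depth w=1 (near) to w=2 (far), u running 0..1.
        // At the midpoint the affine result is 0.5 but the perspective-
        // correct result is 1/3: the far half covers less of the texture.
        System.out.println(affineU(0f, 1f, 0.5f));
        System.out.println(perspectiveU(0f, 1f, 1f, 2f, 0.5f));
    }
}
```

A flat SpriteBatch has no per-vertex depth, so it can only do the first kind, which is why the trapezoid shears at the triangle seam.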
Related
I want to use the libGDX ShapeRenderer to mock up my graphics, but I get a small, nevertheless annoying, mismatch of coordinates. As mentioned in the libGDX wiki, screen coordinates range from (0,0) (lower-left corner) to (Gdx.graphics.getWidth()-1, Gdx.graphics.getHeight()-1) (upper-right corner).
Here is a code sample of what I'm trying to do:
public static final int APP_WIDTH = 160;
public static final int APP_HEIGHT = 120;
private SpriteBatch batch;
private ShapeRenderer shapeRenderer;
private Texture texture;
...
public void render() {
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
if (DEBUG_MODE) {
shapeRenderer.setAutoShapeType(true);
shapeRenderer.setColor(Color.RED);
shapeRenderer.begin();
shapeRenderer.rect(0, 0, APP_WIDTH - 1, APP_HEIGHT - 1);
shapeRenderer.end();
} else {
batch.begin();
batch.draw(texture, 0, 0);
batch.end();
}
}
And the result I get:
image link
As you can see, the texture (160x120) is drawn exactly as it should be, but the rectangle drawn with shapeRenderer has a -1 offset on the x-axis.
Of course, I can just draw everything at x+1, but maybe I'm doing something wrong and there is a way to make the coordinates match?
I already have a drawn model, but it has flat shading (from what I understand, it should be smooth by default...).
This is the initial config:
private void SetLightningAndMaterials(){
//float[] lightPos = {1, 1, 1, 0};
float[] lightPos = {0, 0, 1, 0};
float[] lightColorDiffuse = {1, 1, 1, 1};
float[] lightColorAmbient = {0.2f, 0.2f, 0.2f, 1};
gl.glShadeModel(GL.GL_SMOOTH);
gl.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION, lightPos, 0);
gl.glLightfv(GL.GL_LIGHT1, GL.GL_DIFFUSE, lightColorDiffuse, 0);
gl.glLightfv(GL.GL_LIGHT1, GL.GL_AMBIENT, lightColorAmbient, 0);
gl.glEnable(GL.GL_LIGHT1);
gl.glEnable(GL.GL_LIGHTING);
gl.glMaterialfv(GL.GL_FRONT, GL.GL_AMBIENT, ambientColour, 0);
gl.glMaterialfv(GL.GL_FRONT, GL.GL_DIFFUSE, mesh.colour, 0);
gl.glEnable(GL.GL_LIGHTING);
gl.glEnable(GL.GL_LIGHT0);
float[] noAmbient =
{ 0.1f, 0.1f, 0.1f, 1f }; // low ambient light
float[] spec =
{ 1f, 0.6f, 0f, 1f }; // orange-ish specular color
float[] diffuse =
{ 0.5f, 0.5f, 0.5f, 1f };
gl.glLightfv(GL.GL_LIGHT0, GL.GL_AMBIENT, noAmbient, 0);
gl.glLightfv(GL.GL_LIGHT0, GL.GL_SPECULAR, spec, 0);
gl.glLightfv(GL.GL_LIGHT0, GL.GL_DIFFUSE, diffuse, 0);
gl.glLightfv(GL.GL_LIGHT0, GL.GL_POSITION, new float[]{0,0,10,1}, 0);
}
And this is how I draw the model:
public void Draw(GL gl, GLU glu){
Vec3d normal;
MassPoint vertex1, vertex2, vertex3;
int faceIndex=0;
Face surfaceFace;
for (faceIndex=0; faceIndex<surfaceFaces.size();faceIndex++){
surfaceFace = surfaceFaces.get(faceIndex);
surfaceFace.recalculateNormal();
vertex1 = surfaceFace.vertex1;
vertex2 = surfaceFace.vertex2;
vertex3 = surfaceFace.vertex3;
normal = surfaceFace.normal;
gl.glBegin(gl.GL_TRIANGLES);
gl.glNormal3d(normal.x, normal.y, normal.z);
gl.glMaterialfv(GL.GL_FRONT, GL.GL_DIFFUSE, colour, 0);
gl.glVertex3d(vertex1.position.x, vertex1.position.y, vertex1.position.z);
gl.glVertex3d(vertex2.position.x, vertex2.position.y, vertex2.position.z);
gl.glVertex3d(vertex3.position.x, vertex3.position.y, vertex3.position.z);
gl.glEnd();
}
}
I want to believe there's an easy way of solving this without having to create a shader (I have no idea how to set those up in Java).
I'm using JOGL 1, by the way, and probably an old version (the imports are like javax.media.opengl.*).
I managed to solve the problem. For smooth shading to work, each triangle needs 3 normals (one per vertex); I was only passing 1 normal (one per face).
Here's the new code for the drawing:
public void Draw(GL gl, GLU glu) {
Vec3d[] normalsPerVertex = new Vec3d[3];
MassPoint vertex1, vertex2, vertex3;
int faceIndex=0;
Face surfaceFace;
for (faceIndex=0; faceIndex<surfaceFaces.size();faceIndex++){
surfaceFace = surfaceFaces.get(faceIndex);
vertex1=surfaceFace.vertex1;
normalsPerVertex[0] = vertex1.CalcNormal();
vertex2=surfaceFace.vertex2;
normalsPerVertex[1] = vertex2.CalcNormal();
vertex3=surfaceFace.vertex3;
normalsPerVertex[2] = vertex3.CalcNormal();
gl.glBegin(GL.GL_TRIANGLES);
gl.glNormal3d(normalsPerVertex[0].x, normalsPerVertex[0].y, normalsPerVertex[0].z);
gl.glVertex3d(vertex1.position.x, vertex1.position.y, vertex1.position.z);
gl.glNormal3d(normalsPerVertex[1].x, normalsPerVertex[1].y, normalsPerVertex[1].z);
gl.glVertex3d(vertex2.position.x, vertex2.position.y, vertex2.position.z);
gl.glNormal3d(normalsPerVertex[2].x, normalsPerVertex[2].y, normalsPerVertex[2].z);
gl.glVertex3d(vertex3.position.x, vertex3.position.y, vertex3.position.z);
gl.glEnd();
}
}
The calculated normal for each vertex is the average of the normals of all the faces connected to that vertex. Here's the code for that:
public Vec3d CalcNormal() {
Vec3d normalMedia = new Vec3d();
for (Face face : facesRelated) {
face.recalculateNormal();
normalMedia.add(face.normal);
}
normalMedia.mul(1d/facesRelated.size());
return normalMedia;
}
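One caveat with this approach (not from the original post): the mean of several unit normals is generally shorter than unit length, so unless GL_NORMALIZE or GL_RESCALE_NORMAL is enabled, lighting intensity gets skewed. Renormalizing after averaging is cheap. A sketch with a minimal array-based stand-in for Vec3d:

```java
public class NormalAverage {

    // Minimal stand-in for the post's Vec3d, just enough for the demo.
    static double[] normalize(double[] v) {
        double len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new double[] { v[0] / len, v[1] / len, v[2] / len };
    }

    static double[] averageNormal(double[][] faceNormals) {
        double[] sum = new double[3];
        for (double[] n : faceNormals) {
            sum[0] += n[0]; sum[1] += n[1]; sum[2] += n[2];
        }
        // Dividing by the count gives the mean, but its length is < 1
        // whenever the face normals disagree, so renormalize instead.
        return normalize(sum);
    }

    public static void main(String[] args) {
        // Two unit face normals 90 degrees apart: the plain mean would have
        // length sqrt(0.5) ~= 0.707; the renormalized result is unit length.
        double[] n = averageNormal(new double[][] { { 0, 0, 1 }, { 1, 0, 0 } });
        System.out.println(n[0] + ", " + n[1] + ", " + n[2]);
    }
}
```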
Hope this helps someone else.
I'm attempting to get a texture to show up on a square made from a triangle fan; the texture is made from a Canvas.
The main color is just yellow and a smaller box is drawn inside it, but the final texture renders as solid yellow.
Yellow square with no texture (picture)
Fragment shader:
public static final String fragmentShaderCode_TEXTURED =
"precision mediump float;" +
"varying vec2 v_texCoord;" +
"uniform sampler2D s_texture;" +
"void main() {" +
//"gl_FragColor = vColor;"+
" gl_FragColor = texture2D( s_texture, v_texCoord );" +
"}";
Texture generation:
public static int loadGLTexture(String s){
Rect r = new Rect();
ThreadDat.get().paint.getTextBounds(s, 0, 1, r); //get string dimensions, yields 8x9 px
Bitmap bitmap = Bitmap.createBitmap(bestSize(r.width()),bestSize(r.height()), Bitmap.Config.ARGB_8888);
//example size is 16x16pxls
Log.i("TextureSize", r.width() + " " + r.height());
Canvas c = new Canvas(bitmap);
//some temporary test code setting the background yellow
//Paint colors are stored per thread, only one right now
ThreadDat.get().paint.setARGB(255, 255, 255, 0);
c.drawRect(0, 0, c.getWidth(), c.getHeight(), ThreadDat.get().paint);
//type the letter, in this case "A" in blue
ThreadDat.get().paint.setARGB(255, 0, 0, 255);
ThreadDat.get().paint.setTypeface(Typeface.create("Consolas", Typeface.NORMAL));
c.drawText(s.charAt(0) + "", 0, 0, ThreadDat.get().paint);
//draw another square that is half width and height, should be Blue
c.drawRect(0, 0, c.getWidth() / 2, c.getHeight() / 2, ThreadDat.get().paint);
return loadTexture(bitmap);
}
Draw code:
@Override
public void draw() {
//clearing any error to check if program has an error
GLES20.glGetError();
//get the compiled shader for textured shapes
int prgm = MyGLRenderer.getSTRD_TXTR_SHDR();
GLES20.glUseProgram(prgm);
//check for new errors and log to logcat (nothing)
MyGLRenderer.logError();
//setup projection view matrix
float[] scratch = new float[16];
Matrix.setIdentityM(scratch, 0);
Matrix.multiplyMM(scratch, 0, MyGLRenderer.getmMVPMatrix(), 0, scratch, 0);
//apply translations to matrix
Matrix.translateM(scratch, 0, xOffset, yOffset, zOffset);
Matrix.setRotateEulerM(scratch, 0, yaw, pitch, roll);
//get vPosition variable handle from chosen shader
mPosHandle = GLES20.glGetAttribLocation(prgm, "vPosition");
GLES20.glEnableVertexAttribArray(mPosHandle);
GLES20.glVertexAttribPointer(mPosHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT,
false, VERTEX_STRIDE, vertexBuffer);
////pass color data (set to white)
//mColorHandle = GLES20.glGetUniformLocation(prgm, "vColor");
//GLES20.glUniform4fv(mColorHandle, 1, color, 0);
//use texture0
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
//use texture from -> int textureID = MyGLRenderer.loadGLTexture("A");
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureID);
//get handle for "uniform sampler2D s_texture;" to set its value
int txtureHandle = GLES20.glGetUniformLocation(prgm, "s_texture");
GLES20.glUniform1i(txtureHandle, 0); //point s_texture at texture unit 0
//pass in texture coords (u,v / s,t)
int textureCoordHndl = GLES20.glGetAttribLocation(prgm, "a_texCoord");
GLES20.glVertexAttribPointer(textureCoordHndl, 2/*size: 2 components per tex coord*/,
GLES20.GL_FLOAT, false, 0, textureBuffer);
//pass in the model view projection matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(prgm, "uMVPMatrix");
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, scratch, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, vertex_count);
GLES20.glDisableVertexAttribArray(mPosHandle);
MyGLRenderer.logError();
}
I tried using the same coordinate set as used in this example:
Vertices for square:
{ 0, 0, 0, //bottom left
0, height, 0, //topLeft
width, 0, 0, // bottom Right
width, height, 0)}; //topRight
Texture coords:
0.0f, 1.0f, // top left (V2)
0.0f, 0.0f, // bottom left (V1)
1.0f, 1.0f, // top right (V4)
1.0f, 0.0f // bottom right (V3)
Similar Issue
This does sound like an issue with texture coordinates. Since the whole thing is yellow, I would suspect that v_texCoord is always (0,0) in your fragment shader, so the first texel is being repeated everywhere.
The texture itself seems to be OK, since its color is being drawn; without the texture you would most likely see a black rectangle.
Anyway, to track down issues like this you need to be a bit inventive with debugging. To test the coordinates, use gl_FragColor = vec4(v_texCoord.x, v_texCoord.y, 0.0, 1.0);. This should output a gradient rectangle where the top left is black, the top right is red, and the bottom left is green. If you do not see this result, your texture coordinates are incorrect. In that case, first check that the varying is correctly connected from the vertex shader: set v_texCoord = vec2(1.0, 0.0); in the vertex shader, and the result should be a red rectangle (assuming you still have the previous test in the fragment shader). If the rectangle is red, the issue is most likely in your handles and not in the shaders; otherwise the varying is incorrectly set (maybe a mismatch in naming). Also check the value of the handle textureCoordHndl: if it is negative, the attribute was not found, most likely due to a name mismatch.
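The two debug shaders just described, written out in the same string style as the question's code (the fragment varying name v_texCoord matches the question; the vPosition/uMVPMatrix names in the vertex shader are assumed from the draw code):

```java
public class DebugShaders {

    // Fragment shader that visualizes the incoming texture coordinates
    // as a red/green gradient instead of sampling the texture.
    public static final String FRAG_SHOW_TEXCOORD =
        "precision mediump float;" +
        "varying vec2 v_texCoord;" +
        "void main() {" +
        "  gl_FragColor = vec4(v_texCoord.x, v_texCoord.y, 0.0, 1.0);" +
        "}";

    // Vertex shader that forces v_texCoord to a constant; combined with
    // the fragment shader above, a solid red quad means the varying is
    // wired up correctly and the problem is in the attribute data.
    public static final String VERT_FORCE_TEXCOORD =
        "uniform mat4 uMVPMatrix;" +
        "attribute vec4 vPosition;" +
        "varying vec2 v_texCoord;" +
        "void main() {" +
        "  v_texCoord = vec2(1.0, 0.0);" +
        "  gl_Position = uMVPMatrix * vPosition;" +
        "}";
}
```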
From further inspection:
You are also missing the call that enables the attribute for texture coordinates: GLES20.glEnableVertexAttribArray(textureCoordHndl);. Remember that each attribute must be enabled before you use it.
I can use the code below to scale and translate a square using OpenGL ES, but I'm not sure how to calculate the translation and scale factors. For example, given the picture of the OpenGL coordinate system below, with Matrix.translateM(mViewMatrix, 0, .5f, 0, 0); I would expect the square to be drawn halfway to the right of the screen, but instead it's drawn halfway to the left of center. However, Matrix.translateM(mViewMatrix, 0, 0, .5f, 0); does translate the square halfway up the screen from center.
How would I translate and scale in order to programmatically draw N squares side by side, filling the top of the screen horizontally?
@Override
public void onDrawFrame(GL10 unused) {
// Draw background color
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
// Set the camera position (View matrix)
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Translate by some amount
// Matrix.translateM(mViewMatrix, 0, ?, ?, 0);
// Scale by some amount
// Matrix.scaleM(mViewMatrix, 0, ?, ?, 1);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
// Draw square
mSquare.draw(mMVPMatrix);
}
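The flipped x direction falls out of the camera setup: with the eye at (0,0,-3) looking back at the origin with +y up, setLookAtM produces a view matrix whose x axis points along world -x, and translateM post-multiplies, i.e. shifts points in world space before viewing. A plain-Java sketch with that view matrix hard-coded (so it runs without Android; the matrix values are what setLookAtM computes for exactly this camera):

```java
public class ViewFlipDemo {

    // View matrix for eye=(0,0,-3), center=(0,0,0), up=(0,1,0), written
    // row-major for readability. Note the -1 entries: the camera's x and
    // z axes point along world -x and -z.
    static final double[][] VIEW = {
        { -1, 0,  0,  0 },
        {  0, 1,  0,  0 },
        {  0, 0, -1, -3 },
        {  0, 0,  0,  1 }
    };

    // Apply a 4x4 matrix to a homogeneous point.
    static double[] apply(double[][] m, double[] p) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += m[i][j] * p[j];
        return r;
    }

    public static void main(String[] args) {
        // Matrix.translateM(mViewMatrix, 0, 0.5f, 0, 0) post-multiplies a
        // translation, so a square at the origin is viewed as if it sat
        // at world (0.5, 0, 0):
        double[] viewed = apply(VIEW, new double[] { 0.5, 0, 0, 1 });
        // View-space x comes out negative: the square lands on the LEFT
        // half of the screen, matching what the question observed.
        System.out.println(viewed[0] + ", " + viewed[1] + ", " + viewed[2]);
    }
}
```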
I am not sure why you would need translate and scale to fill up a row of squares. To get rows of squares programmatically in OpenGL ES, I would just create a bunch of squares initialized where you want them. An edited snippet from one of my projects goes something like this:
public void onSurfaceCreated(GL10 unused, EGLConfig config){
GLES20.glClearColor(bgr, bgg, bgb, bga);
float z=0.0f;
float y=0.0f;
for(float i=(-worldwidth);i<worldwidth;i+=(cellwidth)){
square=new Square(i,y,z);
cellvec.addElement(square);
}
}
public void onDrawFrame(GL10 unused) {
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
for(int i=0;i<cellvec.size();i++){
cellvec.elementAt(i).draw(mMVPMatrix);
}
}
I am not entirely sure if something like this is what you're looking for, but it seems to produce the row of squares you wanted.
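To answer the "N squares filling a row" part directly: if the row spans a known width, the cell size and positions follow from a simple division, mirroring the loop above. A hypothetical helper (names and values are illustrative, not from the snippet):

```java
public class RowLayout {

    // For n squares spanning x in [-halfWidth, +halfWidth], return the
    // left edge of each square; each square is (2 * halfWidth) / n wide.
    static float[] leftEdges(float halfWidth, int n) {
        float cell = (2 * halfWidth) / n;
        float[] xs = new float[n];
        for (int i = 0; i < n; i++) {
            xs[i] = -halfWidth + i * cell;
        }
        return xs;
    }

    public static void main(String[] args) {
        // 4 squares across a span 2 units wide (x from -1 to 1, like NDC):
        // left edges at -1.0, -0.5, 0.0, 0.5, each square 0.5 wide.
        for (float x : leftEdges(1f, 4)) {
            System.out.println(x);
        }
    }
}
```

Each Square would then be constructed at its left edge (plus the row's y), exactly as the loop over worldwidth/cellwidth does above.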
I have a simple plane Mesh that is 100x100. Following the libgdx tutorials I've successfully mapped a texture over the mesh. However, it looks odd right from the start, and even stranger when I zoom out. What I'm aiming for is a simple grid pattern.
Here's the plane zoomed in:
Now zoomed out:
The texture itself is a small 64x64 square, outlined.
My Grid class looks like this (Grid extends gdx.graphics.Mesh):
private final int HALFWIDTH = 50, HALFLENGTH = 50;
private Texture texture;
public Grid() {
super( true, 4, 4,
new VertexAttribute(Usage.Position, 3, "a_position"),
new VertexAttribute(Usage.ColorPacked, 4, "a_color"),
new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoords")
);
setVertices(new float[] {
-HALFWIDTH, -HALFLENGTH, -2f, Color.toFloatBits(255, 0, 0, 255), -HALFWIDTH, HALFLENGTH,
HALFWIDTH, -HALFLENGTH, -2f, Color.toFloatBits(0, 255, 0, 255), HALFWIDTH, -HALFLENGTH,
-HALFWIDTH, HALFLENGTH, -2f, Color.toFloatBits(0, 0, 255, 255), -HALFWIDTH, HALFLENGTH,
HALFWIDTH, HALFLENGTH, -2f, Color.toFloatBits(0, 255, 255, 0), HALFWIDTH, HALFLENGTH
});
setIndices(new short[] { 0, 1, 2, 3 });
this.texture = new Texture( Gdx.files.internal("assets/grid.png") );
this.texture.setWrap( TextureWrap.Repeat, TextureWrap.Repeat );
this.texture.setFilter( TextureFilter.Linear, TextureFilter.Linear );
}
void draw() {
Gdx.graphics.getGL10().glEnable(GL10.GL_TEXTURE_2D);
this.texture.bind();
render(GL10.GL_TRIANGLE_STRIP, 0, 4);
}
I'm not 100% sure, but I have a strong suspicion this is because you're using plain linear filtering on your texture. When you zoom in and out, OpenGL has to choose how to display the texture at different resolutions, and linear minification is well known to produce the shimmering, aliased effect your screenshots show. It's due to the thin lines (high-frequency detail) in the texture you are using.
Try changing your texture filter to use mipmaps:
this.texture.setFilter(TextureFilter.MipMapLinearLinear, TextureFilter.Linear);
This will pre-compute scaled-down versions of your texture and avoid the aliasing. Note that only the minification filter (the first argument) can be a mipmap filter; the magnification filter must stay Nearest or Linear. Let me know if this works.
Not sure if this will help, but the texture coordinates of your first vertex are not consistent with the others. Look at the highlighted values:
-HALFWIDTH, **-HALFLENGTH**, -2f, Color.toFloatBits(255, 0, 0, 255), -HALFWIDTH, **HALFLENGTH**,
HALFWIDTH, -HALFLENGTH, -2f, Color.toFloatBits(0, 255, 0, 255), HALFWIDTH, -HALFLENGTH,
-HALFWIDTH, HALFLENGTH, -2f, Color.toFloatBits(0, 0, 255, 255), -HALFWIDTH, HALFLENGTH,
HALFWIDTH, HALFLENGTH, -2f, Color.toFloatBits(0, 255, 255, 0), HALFWIDTH, HALFLENGTH
The texture V coordinate in the first vertex is +HALFLENGTH, not negative like the position on the same line. This may be throwing off the texturing calculation.