Assigning textures of a .mtl file in OpenGL ES 2 Android - java

I have successfully loaded a .obj file into vertex arrays in Java. However, I was wondering if anyone could guide me on how to assign textures to my object based on the .mtl file I have.
So far I can only assign a single texture to the entire .obj file using this code:
public static int loadTexture(final Context context, final int resourceId)
{
    final int[] textureHandle = new int[1];

    GLES20.glGenTextures(1, textureHandle, 0);

    if (textureHandle[0] != 0)
    {
        final BitmapFactory.Options options = new BitmapFactory.Options();
        options.inScaled = false; // No pre-scaling

        // Read in the resource
        final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);

        // Bind to the texture in OpenGL
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

        // Set filtering
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

        // Load the bitmap into the bound texture.
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

        // Recycle the bitmap, since its data has been loaded into OpenGL.
        bitmap.recycle();
    }

    if (textureHandle[0] == 0)
    {
        throw new RuntimeException("Error loading texture.");
    }

    return textureHandle[0];
}
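A sketch of the direction I'm considering (a minimal sketch only, using hypothetical names: MaterialGroup and groups would come from my .obj parser, and the drawable lookup is just for illustration): parse the .mtl's newmtl/map_Kd pairs into a map from material name to texture handle, split the .obj faces into one group per usemtl, and bind the matching texture before drawing each group:
// Sketch only: parse "newmtl"/"map_Kd" pairs from the .mtl into a map from
// material name to GL texture handle, reusing loadTexture() above.
// (Error handling omitted; looking textures up as drawables is an assumption.)
Map<String, Integer> materialTextures = new HashMap<String, Integer>();
String currentMtl = null;
BufferedReader reader = new BufferedReader(
        new InputStreamReader(context.getAssets().open("model.mtl")));
String line;
while ((line = reader.readLine()) != null) {
    String[] tokens = line.trim().split("\\s+");
    if (tokens[0].equals("newmtl")) {
        currentMtl = tokens[1];
    } else if (tokens[0].equals("map_Kd") && currentMtl != null) {
        int resId = context.getResources().getIdentifier(
                tokens[1].replaceAll("\\.\\w+$", ""), "drawable", context.getPackageName());
        materialTextures.put(currentMtl, loadTexture(context, resId));
    }
}
reader.close();

// At draw time, issue one draw call per "usemtl" group instead of one for the whole model:
for (MaterialGroup group : groups) { // hypothetical output of the .obj parser
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, materialTextures.get(group.materialName));
    GLES20.glUniform1i(textureUniformHandle, 0);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, group.firstVertex, group.vertexCount);
}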
Vertex Shader:
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec4 a_Color; // Per-vertex color information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec4 v_Color; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
// The entry point for our vertex shader.
void main()
{
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Pass through the color.
    v_Color = a_Color;
    // Pass through the texture coordinate.
    v_TexCoordinate = a_TexCoordinate;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
Fragment Shader:
precision mediump float;       // Set the default precision to medium. We don't need as high of a
                               // precision in the fragment shader.
uniform vec3 u_LightPos;       // The position of the light in eye space.
uniform sampler2D u_Texture;   // The input texture.
varying vec3 v_Position;       // Interpolated position for this fragment.
varying vec4 v_Color;          // This is the color from the vertex shader interpolated across the
                               // triangle per fragment.
varying vec3 v_Normal;         // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate;  // Interpolated texture coordinate per fragment.

// The entry point for our fragment shader.
void main()
{
    // Will be used for attenuation.
    float distance = length(u_LightPos - v_Position);
    // Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - v_Position);
    // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
    // pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(v_Normal, lightVector), 0.0);
    // Add attenuation.
    diffuse = diffuse * (1.0 / (1.0 + (0.10 * distance)));
    // Add ambient lighting.
    diffuse = diffuse + 0.2;
    // Multiply the color by the diffuse illumination level and texture value to get final output color.
    gl_FragColor = (v_Color * diffuse * texture2D(u_Texture, v_TexCoordinate));
}

Related

Shader gives wrong fragment color

I've run into the strangest problem I've ever had in my shader experience. I created some test shader code (see below) and ran it on a simple texture.
Basically what I was trying to do is make my shader check the color of a fragment, and if it was within a certain range it would color that fragment according to a uniform color variable. The problem I'm having is that my shader does not correctly recognize the color of a fragment. I even went as far as to check if the red portion of the color is equal to one and it always returns true for every fragment. Yet if I use the same shader to draw the original texture it works just fine.
Why is this happening? The shader compiles without any errors whatsoever. I feel like I'm missing something obvious...
Code (if you have access to LibGDX you can run this code for yourself).
// Vertex
#version 330
in vec4 a_position;
in vec4 a_color;
in vec2 a_texCoord0;
out vec4 v_color;
out vec2 v_texCoord0;
uniform mat4 u_projTrans;
void main() {
    v_color = a_color;
    v_texCoord0 = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}
// Fragment
#version 330
in vec4 v_color;
in vec2 v_texCoord0;
out vec4 outColor;
uniform vec4 color;
uniform sampler2D u_texture;
void main() {
    // This is always true for some reason...
    if(v_color.r == 1.0) {
        outColor = color;
    } else {
        // But if I run this code it draws the original texture just fine with the correct colors.
        outColor = v_color * texture2D(u_texture, v_texCoord0);
    }
}
// Java code.
// Creating sprite, shader and sprite batch.
SpriteBatch batch = new SpriteBatch();
Sprite sprite = new Sprite(new Texture("testTexture.png"));
ShaderProgram shader = new ShaderProgram(Gdx.files.internal("vertex.vert"),
        Gdx.files.internal("fragment.frag"));
// Check to see if the shader has logged any errors. Prints nothing.
System.out.println(shader.getLog());
// We start the rendering.
batch.begin();
// We begin the shader so we can load uniforms into it.
shader.begin();
// Set the uniform (works fine).
shader.setUniformf("color", Color.RED);
// We end the shader to tell it we've finished loading uniforms.
shader.end();
// We then tell our renderer to use this shader.
batch.setShader(shader);
// Then we draw our sprite.
sprite.draw(batch);
// And finally we tell our renderer that we're finished drawing.
batch.end();
// Dispose to release resources.
shader.dispose();
batch.dispose();
sprite.getTexture().dispose();
The texture:
You have two input colors in your fragment shader: one sent as a vertex attribute and one sampled from a texture. You intend to check the texture color, but instead you check the color value sent as the vertex attribute.
// Vertex
#version 330
in vec4 a_position;
in vec4 a_color; // <--- A color value is passed as a vertex attribute
in vec2 a_texCoord0;
out vec4 v_color;
out vec2 v_texCoord0;
uniform mat4 u_projTrans;
void main() {
    v_color = a_color; // <--- you are sending it to fragment shader
    v_texCoord0 = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}
// Fragment
#version 330
in vec4 v_color; // <--- coming from vertex buffer
in vec2 v_texCoord0;
out vec4 outColor;
uniform vec4 color;
uniform sampler2D u_texture;
void main() {
    // This is always true for some reason...
    if(v_color.r == 1.0) { // <--- and then checking this vertex attribute color
        outColor = color;  // instead of what you send from the texture
    } else {
        ...
    }
}
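If the goal is to test the texture's color rather than the vertex color, a minimal sketch of the fix is to sample the texture first and branch on that value (note that exact floating-point equality is fragile, so a small tolerance is used here instead of ==):
void main() {
    vec4 texColor = texture2D(u_texture, v_texCoord0);
    // Branch on the sampled texture color, not on v_color;
    // comparing with a tolerance avoids precision issues.
    if(texColor.r > 0.99) {
        outColor = color;
    } else {
        outColor = v_color * texColor;
    }
}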

LWJGL change line's color between vertices

I'm using java and LWJGL/openGL to create some graphics. For rendering I use the following:
Constructor of RawModel:
public RawModel(int vaoID, int vertexCount, String name){
    this.vaoID = vaoID;
    this.vertexCount = vertexCount;
    this.name = name;
}
Renderer:
public void render(Entity entity, StaticShader shader){
    RawModel rawModel = entity.getRaw(active);
    GL30.glBindVertexArray(rawModel.getVaoID());
    GL20.glEnableVertexAttribArray(0);
    GL20.glEnableVertexAttribArray(1);
    GL20.glEnableVertexAttribArray(3);
    Matrix4f transformationMatrix = Maths.createTransformationMatrix(entity.getPosition(),
            entity.getRotX(), entity.getRotY(), entity.getRotZ(), entity.getScale());
    shader.loadTransformationMatrix(transformationMatrix);
    GL11.glDrawElements(GL11.GL_TRIANGLES, rawModel.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);
    GL20.glDisableVertexAttribArray(0);
    GL20.glDisableVertexAttribArray(1);
    GL20.glDisableVertexAttribArray(3);
    GL30.glBindVertexArray(0);
}
I'm using GL11.GL_TRIANGLES cuz that's how I can make models' lines show up instead of faces. But when I set a color for a vertex, it just colors the surrounding lines with the set color; in some cases all of its lines just take the color of the surrounding vertices. How could I make it so that it kind of combines those 2 colors depending on the distance of each vertex and the colors?
Fragment shader:
#version 400 core
in vec3 colour;
in vec2 pass_textureCoords;
out vec4 out_Color;
uniform sampler2D textureSampler;
void main(void){
    vec4 textureColour = texture(textureSampler, pass_textureCoords);
    //out_Color = texture(textureSampler, pass_textureCoords);
    out_Color = vec4(colour, 1.0);
}
Vertex shader:
#version 400 core
in vec3 position;
in vec2 textureCoords;
in int selected;
out vec3 colour;
out vec2 pass_textureCoords;
uniform mat4 transformationMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
void main(void){
    gl_Position = projectionMatrix * viewMatrix * transformationMatrix * vec4(position, 1.0);
    pass_textureCoords = textureCoords;
    if(selected == 1){
        colour = vec3(200, 200, 200);
    }
    else{
        colour = vec3(0, 0, 0);
    }
    gl_BackColor = vec4(colour, 1.0);
}
I'm using GL11.GL_TRIANGLES cuz that's how I can make models' lines show up instead of faces.
Well, GL_TRIANGLES is for rendering triangles, which are faces. If you only want the models' lines, you can use one of the line drawing modes (GL_LINES, GL_LINE_LOOP, GL_LINE_STRIP etc).
However, a better way is to enable
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
which makes only the outlines of the triangles show up.
You can switch it off again with
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
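In LWJGL that maps onto GL11; a sketch of how it could look in the render method above:
// Draw outlines only, then restore filled rendering for everything else.
GL11.glPolygonMode(GL11.GL_FRONT_AND_BACK, GL11.GL_LINE);
GL11.glDrawElements(GL11.GL_TRIANGLES, rawModel.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);
GL11.glPolygonMode(GL11.GL_FRONT_AND_BACK, GL11.GL_FILL);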
How could I make it so that it kind of combines those 2 colors depending on the distance of each vertex and the colors?
I'm not sure what you mean by this; by default, values passed from the vertex shader to the fragment shader are already interpolated across the mesh, so the color a fragment receives already depends on all the vertices' colors and distances.
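For contrast, a sketch: interpolation can be suppressed per varying with the flat qualifier, in which case every fragment of a primitive receives the value from the provoking vertex instead of a blend:
flat out vec3 colour; // vertex shader: do not interpolate this varying
flat in vec3 colour;  // fragment shader: matching declaration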
Edit:
In the vertex shader:
if(selected == 1){
colour = vec3(200, 200, 200);
}
I assume you want to assign an RGB value of (200, 200, 200), which is a light gray. However, OpenGL uses floating-point color components in the range 0.0 to 1.0; values above or below this range are clamped. The value of colour is interpolated across the fragments, which therefore receive components far higher than 1.0. These are clamped to 1.0, meaning all your fragments appear white.
So, in order to solve this issue, you have to instead use something like
colour = vec3(0.8, 0.8, 0.8);
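Equivalently, if you want to keep thinking in 0-255 terms, divide by 255.0 (a sketch):
colour = vec3(200.0 / 255.0); // ~ vec3(0.784), the intended light gray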

Texture on mesh doesn't render, just shows black libgdx gl20

Edit:
I've updated my code after TenFour04's answer, but it still just shows black.
I've updated my libgdx version, which has required me to use GL20 and to make a few changes.
Most of it works fine except for texturing the mesh. Currently the surface mesh shows as black and the ground mesh doesn't show at all; with some changes I can get it to show both the surface and ground meshes as black.
I've played around with the binding, with the order of surTexture.bind and grdTexture.bind, and with unit numbers less than 16, and I've managed to get it to use the surface texture as the texture for everything except the surface and ground.
Can anyone see where I might be going wrong with this?
// creating a mesh with maxVertices set to vertices.size*3
groundMesh = new Mesh(true, vertices.size*3, vertices.size*3,
        new VertexAttribute(Usage.Position, 2, ShaderProgram.POSITION_ATTRIBUTE),
        new VertexAttribute(Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE));
groundMesh.setVertices(temp);

short[] indices = new short[vertices.size*2];
for(int i = 0; i < vertices.size*2; i++){
    indices[i] = (short)i;
}
groundMesh.setIndices(indices);

surfaceMesh = new Mesh(true, vertices.size*3, vertices.size*3,
        new VertexAttribute(Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE),
        new VertexAttribute(Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE));
...
grdTexture = new Texture(Gdx.files.internal("data/img/leveltest/ground.png"));
// Gdx.graphics.getGL20().glActiveTexture(GL20.GL_TEXTURE16);
// The docs say that setWrap and setFilter bind the texture, so I thought I might
// have to set the active texture here, but it does nothing.
grdTexture.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
grdTexture.setFilter(TextureFilter.Linear, TextureFilter.Linear);

surTexture = new Texture(Gdx.files.internal("data/img/leveltest/surface.png"));
// Gdx.graphics.getGL20().glActiveTexture(GL20.GL_TEXTURE17);
surTexture.setWrap(TextureWrap.Repeat, TextureWrap.ClampToEdge);
// TODO change these filters for better quality
surTexture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
drawWorld gets called inside render()
public void drawWorld(SpriteBatch batch, OrthographicCamera camera) {
    batch.begin();
    batch.setProjectionMatrix(camera.combined);
    layers.drawLayers(batch);
    if ((spatials != null) && (spatials.size > 0)){
        for (int i = 0; i < spatials.size; i++){
            spatials.get(i).render(batch);
        }
    }
    batch.end();
    drawGround(camera);
}

private void drawGround(OrthographicCamera camera){
    shader.begin();
    shader.setUniformMatrix("u_projTrans", camera.combined);

    grdTexture.bind(0);
    shader.setUniformi("u_texture", 0);
    // changed GL_TRIANGLES to GL_TRIANGLE_STRIP to render meshes correctly after changing to GL20
    groundMesh.render(shader, GL20.GL_TRIANGLE_STRIP);

    surTexture.bind(0);
    shader.setUniformi("u_texture", 0);
    surfaceMesh.render(shader, GL20.GL_TRIANGLE_STRIP);

    shader.end();
}
fragment.glsl
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif
varying LOWP vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
void main()
{
    gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
}
vertex.glsl
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
    v_color = a_color;
    v_color.a = v_color.a * (256.0/255.0);
    v_texCoords = a_texCoord;
    gl_Position = u_projTrans * a_position;
}
In your vertex shader, you are using a_texCoord, but in your mesh constructor, you have effectively named your attributes a_texCoord16 and a_texCoord17 by using ShaderProgram.TEXCOORD_ATTRIBUTE+"16" and ShaderProgram.TEXCOORD_ATTRIBUTE+"17".
Since you are not multi-texturing, I would just replace those with "a_texCoord".
It looks like maybe you are conflating attribute name suffixes with texture units, although the two concepts are not necessarily related. The reason you might want to add number suffixes to your texCoords is if your mesh has multiple UVs per vertex because it is multi-textured; but really you can use any naming scheme you like. The reason you might want to bind to a unit other than 0 is if you're multi-texturing a single mesh, so you need multiple textures bound at once. So if you actually were multi-texturing, using attribute suffixes that match texture unit numbers might help avoid confusion when you are trying to match UVs to textures in the fragment shader.
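Concretely, a sketch of the mesh constructor with the plain attribute name matching the shader's declaration:
// Name the attribute exactly as the vertex shader declares it ("a_texCoord"):
groundMesh = new Mesh(true, vertices.size*3, vertices.size*3,
        new VertexAttribute(Usage.Position, 2, ShaderProgram.POSITION_ATTRIBUTE),
        new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord"));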
OK, so it turns out the problem was the vertex shader. The code from here doesn't work; here is the working shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
    v_color = vec4(1, 1, 1, 1);
    v_texCoords = a_texCoord;
    gl_Position = u_projTrans * a_position;
}

Getting world position for deferred rendering light pass

I have recently begun to build a kind of deferred rendering pipeline for the engine I am working on, but I'm stuck at reconstructing the world position from depth. I have looked at quite a few examples which explain that you need either a world-position texture or a depth texture for the correct distance and direction calculation of the light.
My problem is that the so-called position texture, which presumably holds the world position, doesn't seem to give me correct data. Therefore I tried to find alternative ways of getting a world position, and some have suggested that I should use a depth texture instead - but then what?
To make it all clearer, this picture shows the textures that I currently have stored:
Position (top left), Normal (top right), Diffuse (bottom left) and Depth (bottom right).
For the light pass I am trying to use a method which works fine when used in the first pass. When I try the same method for the light pass with the exact same variables, it stops working.
Here's my Geometry Vertex Shader:
#version 150
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec4 in_Position;
in vec3 in_Normal;
in vec2 in_TextureCoord;
out vec3 pass_Normals;
out vec4 pass_Position;
out vec2 pass_TextureCoord;
out vec4 pass_Diffuse;
void main(void) {
    pass_Position = viewMatrix * modelMatrix * in_Position;
    pass_Normals = (viewMatrix * modelMatrix * vec4(in_Normal, 0.0)).xyz;
    pass_Diffuse = vec4(1,1,1,1);
    gl_Position = projectionMatrix * pass_Position;
}
Geometry Fragment Shader:
#version 150 core
uniform sampler2D texture_diffuse;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec4 pass_Position;
in vec3 pass_Normals;
in vec2 pass_TextureCoord;
in vec4 pass_Diffuse;
out vec4 out_Diffuse;
out vec4 out_Position;
out vec4 out_Normals;
void main(void) {
    out_Position = pass_Position;
    out_Normals = vec4(pass_Normals, 1.0);
    out_Diffuse = pass_Diffuse;
}
Light Vertex Shader:
#version 150
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 pass_TextureCoord;
void main( void )
{
    gl_Position = in_Position;
    pass_TextureCoord = in_TextureCoord;
}
Light Fragment Shader:
#version 150 core
uniform sampler2D texture_Diffuse;
uniform sampler2D texture_Normals;
uniform sampler2D texture_Position;
uniform vec3 cameraPosition;
uniform mat4 viewMatrix;
in vec2 pass_TextureCoord;
out vec4 frag_Color;
void main( void )
{
    frag_Color = vec4(1,1,1,1);
    vec4 image = texture(texture_Diffuse, pass_TextureCoord);
    vec3 position = texture(texture_Position, pass_TextureCoord).rgb;
    vec3 normal = texture(texture_Normals, pass_TextureCoord).rgb;
    frag_Color = image;

    vec3 LightPosition_worldspace = vec3(0,2,0);
    vec3 vertexPosition_cameraspace = position;
    vec3 EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;
    vec3 LightPosition_cameraspace = (viewMatrix * vec4(LightPosition_worldspace, 1)).xyz;
    vec3 LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;

    vec3 n = normal;
    vec3 l = normalize(LightDirection_cameraspace);
    float cosTheta = max(dot(n, l), 0.0);
    float distance = distance(LightPosition_cameraspace, vertexPosition_cameraspace);
    frag_Color = vec4((vec3(10,10,10) * cosTheta) / (distance*distance), 1);
}
And finally, here's the current result:
So my question is: can anyone explain this result, or tell me what I should do to get a correct one? I would also appreciate good resources on the area.
Yes, using the depth buffer to reconstruct position is your best bet. This will significantly cut down on memory bandwidth / storage requirements. Modern hardware is biased towards doing shader calculations rather than memory fetches (this was not always the case), and the instructions necessary to reconstruct position per-fragment will always finish quicker than if you were to fetch the position from a texture with adequate precision. Now, you just have to realize what the hardware depth buffer stores (understand how depth range and perspective distribution work) and you will be good to go.
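As an illustration (not from the original answer), a common reconstruction of the view-space position from a sampled depth value, assuming the default [0, 1] depth range and an inverse projection matrix supplied as a uniform:
// Sketch: texCoord is the full-screen quad UV in [0, 1],
// depth is the value sampled from the depth texture.
vec3 reconstructViewPosition(vec2 texCoord, float depth, mat4 invProjectionMatrix)
{
    // Back to normalized device coordinates in [-1, 1].
    vec4 ndc = vec4(texCoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    // Unproject and undo the perspective divide.
    vec4 viewPos = invProjectionMatrix * ndc;
    return viewPos.xyz / viewPos.w;
}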
I do not see any attempt at reconstructing the world/view-space position from the depth buffer in the code your question lists. You are just sampling from a buffer that stores the position in view space. Since you are not performing reconstruction in this example, the problem has to do with sampling the view-space position. Can you update your question to include the internal formats of the G-Buffer textures? In particular, are you using a format that can represent negative values? This is necessary to express position; otherwise negative values are clamped to 0.
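For example, a signed floating-point format such as GL_RGBA16F keeps negative view-space coordinates intact. An LWJGL-style sketch (the variable names and sizes are assumptions):
// Allocate the position attachment of the G-Buffer in GL_RGBA16F.
int positionTexture = GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, positionTexture);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL30.GL_RGBA16F, width, height, 0,
        GL11.GL_RGBA, GL11.GL_FLOAT, (java.nio.ByteBuffer) null);
GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0,
        GL11.GL_TEXTURE_2D, positionTexture, 0);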
On a final note, your position is in view space and not world space; a trained eye can tell this immediately by the way the colors in your position buffer are black in the lower-left corner. If you want to debug your position/normal, you should bias/scale the sampled colors into the visible range:
([-1.0, 1.0] -> [0.0, 1.0]) // Vec = Vec * 0.5 + 0.5
You may need to do this when you output some of the buffers if you want to store the normal G-Buffer more efficiently (e.g. in an 8-bit fixed-point texture instead of floating-point).

LibGdx shader "no uniform with name..." in phong shader

I'm very new to OpenGL and LibGdx. I started with these tutorials but wanted to apply a Phong shader. I've tried to merge a number of examples but am having issues.
I've got a sphere and a spinning cube in the center of the screen. I've still got hundreds of things to work out, but for the moment I don't understand why LibGdx is reporting that my uniform matrix can't be found...
Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: java.lang.IllegalArgumentException: no uniform with name 'uMvpMatrix' in shader
Vertex Shader
I don't believe the fragment shader is relevant, but it's at the bottom just in case.
#version 120

uniform mat4 uMvpMatrix;
varying vec3 diffuseColor;  // the diffuse Phong lighting computed in the vertex shader
varying vec3 specularColor; // the specular Phong lighting computed in the vertex shader
varying vec4 texCoords;     // the texture coordinates

void main()
{
    vec3 normalDirection = normalize(gl_NormalMatrix * gl_Normal);
    vec3 viewDirection = -normalize(vec3(gl_ModelViewMatrix * gl_Vertex));
    vec3 lightDirection;
    float attenuation;

    if (0.0 == gl_LightSource[0].position.w) // directional light?
    {
        attenuation = 1.0; // no attenuation
        lightDirection = normalize(vec3(gl_LightSource[0].position));
    }
    else // point light or spotlight (or other kind of light)
    {
        vec3 vertexToLightSource = vec3(gl_LightSource[0].position
            - gl_ModelViewMatrix * gl_Vertex);
        float distance = length(vertexToLightSource);
        attenuation = 1.0 / distance; // linear attenuation
        lightDirection = normalize(vertexToLightSource);

        if (gl_LightSource[0].spotCutoff <= 90.0) // spotlight?
        {
            float clampedCosine = max(0.0, dot(-lightDirection,
                gl_LightSource[0].spotDirection));
            if (clampedCosine < gl_LightSource[0].spotCosCutoff) // outside of spotlight cone?
            {
                attenuation = 0.0;
            }
            else
            {
                attenuation = attenuation * pow(clampedCosine,
                    gl_LightSource[0].spotExponent);
            }
        }
    }

    vec3 ambientLighting = vec3(gl_LightModel.ambient); // without material color!

    vec3 diffuseReflection = attenuation
        * vec3(gl_LightSource[0].diffuse)
        * max(0.0, dot(normalDirection, lightDirection)); // without material color!

    vec3 specularReflection;
    if (dot(normalDirection, lightDirection) < 0.0) // light source on the wrong side?
    {
        specularReflection = vec3(0.0, 0.0, 0.0); // no specular reflection
    }
    else // light source on the right side
    {
        specularReflection = attenuation
            * vec3(gl_LightSource[0].specular)
            * vec3(gl_FrontMaterial.specular)
            * pow(max(0.0, dot(reflect(-lightDirection, normalDirection),
                viewDirection)), gl_FrontMaterial.shininess);
    }

    diffuseColor = ambientLighting + diffuseReflection;
    specularColor = specularReflection;
    texCoords = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Setup
shader = new ShaderProgram(vertexShader, fragmentShader);
mesh = Shapes.genCube();
mesh.getVertexAttribute(Usage.Position).alias = "a_position";
Render
...
float aspect = Gdx.graphics.getWidth() / (float) Gdx.graphics.getHeight();
projection.setToProjection(1.0f, 20.0f, 60.0f, aspect);
view.idt().trn(0, 0, -2.0f);
model.setToRotation(axis, angle);
combined.set(projection).mul(view).mul(model);
Gdx.gl20.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
shader.begin();
shader.setUniformMatrix("uMvpMatrix", combined);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
Stack Trace
at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:113)
Caused by: java.lang.IllegalArgumentException: no uniform with name 'uMvpMatrix' in shader
at com.badlogic.gdx.graphics.glutils.ShaderProgram.fetchUniformLocation(ShaderProgram.java:283)
at com.badlogic.gdx.graphics.glutils.ShaderProgram.setUniformMatrix(ShaderProgram.java:539)
at com.badlogic.gdx.graphics.glutils.ShaderProgram.setUniformMatrix(ShaderProgram.java:527)
at com.overshare.document.Views.Test.onRender(Test.java:150)
...
Fragment Shader
#ifdef GL_ES
precision mediump float;
#endif
precision mediump float;
varying vec4 v_Color;
void main()
{
    gl_FragColor = v_Color;
}
Can someone please tell me what I'm missing?
I ran into something similar. I think that because your shader doesn't use the "uMvpMatrix" uniform, its declaration gets optimized out, so it's "not there" when you go to set it. If you change your shader to reference the matrix in some way, you should get farther.
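For example (a sketch), actually using the declared uniform in place of the fixed-function matrix keeps the compiler from stripping it:
// Reference the uniform so the GLSL compiler cannot optimize it away:
gl_Position = uMvpMatrix * gl_Vertex;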
See (indirectly)
Do (Unused) GLSL uniforms/in/out Contribute to Register Pressure?
I believe there are ways of developing and compiling shaders offline, so for a complex shader it may make sense to develop it outside of LibGDX (hopefully you'd get better error messages). LibGDX just passes the giant string on to the lower layers; it doesn't do much with the shader itself, so there shouldn't be compatibility issues.
Also, the problem could be in the precision specifier. On a device (Nexus S), the following uniform definition will throw the same error:
uniform float yShift;
Using a precision specifier solves the problem:
uniform lowp float yShift;
LibGDX lets you check whether shader compilation succeeded and read the error log:
if(!shader.isCompiled()){
    String log = shader.getLog();
}
Finally, there's a flag to ignore such shader errors:
ShaderProgram.pedantic = false;
