I'm using OpenGL 2.1, GLSL 120, and LWJGL 3.
Whenever I call GL20.glUniformMatrix4fv(), the uniform doesn't seem to reach the shader program. After debugging, all shaders and programs report no errors.
A cube is drawn to the window (since there is no projection, it appears as a quad).
This vertex shader compiles with no errors and works:
#version 120
attribute vec3 position;
uniform mat4 modelView;
uniform mat4 proj;
void main()
{
gl_Position = vec4(position, 1.0); // See how we directly give a vec3 to vec4's constructor
}
This vertex shader also compiles with no errors, but nothing is drawn to the window:
#version 120
attribute vec3 position;
uniform mat4 modelView;
uniform mat4 proj;
void main()
{
gl_Position = proj * vec4(position, 1.0); // See how we directly give a vec3 to vec4's constructor
}
Fragment shader:
#version 120
void main()
{
gl_FragColor = vec4(0.0, 1.0, 1.0, 1.0);
}
The programs always compile and link, but the proj uniform doesn't work. This is the code where the uniform data is sent:
GL20.glUseProgram(shaderProgram);
int model = GL20.glGetAttribLocation(shaderProgram, "modelView"); // bug: used glGetAttribLocation here instead of GL20.glGetUniformLocation()
Matrix4f modelView = new Matrix4f().translate(.5f, .5f, .5f);
GL20.glUniformMatrix4fv(model, false, modelView.get(buffer));
int proj = GL20.glGetAttribLocation(shaderProgram, "proj"); // bug: and here
Matrix4f projection = new Matrix4f().
perspective((float)Math.toRadians(45), 1.0f, 0.1f, 100f).
lookAt(0.0f, 0.0f, -3.0f, //eye
0.0f, 0.0f, 0.0f, //center
0.0f, 1.0f, 0.0f);//up
GL20.glUniformMatrix4fv(proj, false, projection.get(buffer));
//draw container
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, VBO);
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, 36);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
[Update] I moved glUseProgram before the calls to glUniformMatrix4fv.
Still nothing is drawn to the screen. When an identity matrix is sent to proj and then multiplied by the vec4, the cube disappears.
Checking:
GL20.glGetShaderiv(vertexID, GL20.GL_ACTIVE_UNIFORMS, success);
on the vertex shader returns zero,
while checking:
GL20.glGetProgramiv(shaderID, GL20.GL_ACTIVE_UNIFORMS, success);
on the shader program returns 1 or 2 when proj and modelView are actually used in main().
[Solved update]
I was using glGetAttribLocation instead of glGetUniformLocation.
Uniforms are per-program state in OpenGL. All of the glUniform*() functions set uniforms on the currently active program object, so you have to call glUseProgram before you can set the uniforms of that program.
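For reference, a minimal sketch of the corrected upload, assuming JOML's Matrix4f and an LWJGL FloatBuffer for the upload (the buffer allocation is an assumption, since it isn't shown in the question):
GL20.glUseProgram(shaderProgram);
// Query *uniform* locations, not attribute locations.
int modelViewLoc = GL20.glGetUniformLocation(shaderProgram, "modelView");
int projLoc = GL20.glGetUniformLocation(shaderProgram, "proj");
FloatBuffer buffer = BufferUtils.createFloatBuffer(16); // scratch buffer for 4x4 uploads
Matrix4f modelView = new Matrix4f().translate(0.5f, 0.5f, 0.5f);
GL20.glUniformMatrix4fv(modelViewLoc, false, modelView.get(buffer));
Matrix4f projection = new Matrix4f()
        .perspective((float) Math.toRadians(45), 1.0f, 0.1f, 100f)
        .lookAt(0.0f, 0.0f, -3.0f,   // eye
                0.0f, 0.0f, 0.0f,    // center
                0.0f, 1.0f, 0.0f);   // up
GL20.glUniformMatrix4fv(projLoc, false, projection.get(buffer));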
Related
When trying to pass a vec4 (of colour data) from a vertex shader to a fragment shader, the resulting output of the fragment shader draws a black VBO.
I have tried changing where I bind the program, using flat, and adding more glfwWindowHints. What is quite strange is that there are no compile or link errors from the shaders or the shader program.
I have also directly tested the fragment shader and found it to work, as well as the binding of data and the drawing of the mesh.
Rendering:
GL45.glBindVertexArray(vao);
program.bind();
GL45.glEnableVertexAttribArray(0);
GL45.glEnableVertexAttribArray(1);
meshes.forEach(mesh -> mesh.draw(vao));
GL45.glDisableVertexAttribArray(0);
GL45.glDisableVertexAttribArray(1);
program.unbind();
GL45.glBindVertexArray(0);
Drawing the meshes:
bindData(vao);
if(beforeDraw != null)
beforeDraw.call();
GL45.glDrawElements(GL45.GL_TRIANGLES, num_vertecies, GL45.GL_UNSIGNED_INT, 0);
if(afterDraw != null)
afterDraw.call();
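For reference, a sketch of the kind of attribute setup bindData(vao) would typically perform for the two inputs this shader declares (buffer handles, stride, and offsets here are assumptions, since bindData isn't shown):
// Assumed layout: 3 position floats followed by 1 data float per vertex (16-byte stride).
GL45.glBindVertexArray(vao);
GL45.glBindBuffer(GL45.GL_ARRAY_BUFFER, vbo);          // vbo: hypothetical vertex buffer handle
GL45.glVertexAttribPointer(0, 3, GL45.GL_FLOAT, false, 4 * Float.BYTES, 0);
GL45.glVertexAttribPointer(1, 1, GL45.GL_FLOAT, false, 4 * Float.BYTES, 3 * Float.BYTES);
GL45.glBindBuffer(GL45.GL_ELEMENT_ARRAY_BUFFER, ebo);  // ebo: hypothetical index buffer for glDrawElements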
Creating ShaderProgram:
/* Called by the constructor, which detaches and deletes the shaders after this runs */
this.program = glCreateProgram();
for(Shader s : shaders) glAttachShader(program, s.get());
glLinkProgram(program);
glValidateProgram(program);
if(validateProgram(program) == GL_FALSE)
printProgramErrorLog(program);
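For reference, a sketch of the raw status queries that a helper like validateProgram() presumably wraps (written in the same static-import style as above):
// Query link and validation status explicitly and dump the info log on failure.
if (glGetProgrami(program, GL_LINK_STATUS) == GL_FALSE
        || glGetProgrami(program, GL_VALIDATE_STATUS) == GL_FALSE) {
    System.err.println(glGetProgramInfoLog(program));
}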
Creating Shaders:
shader = glCreateShader(getShaderType());
glShaderSource(shader, source);
glCompileShader(shader);
if(validateShader(shader) == GL_FALSE) {
printShaderErrorLog(shader);
delete();
}
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 vertex;
layout(location = 1) in float data;
out vec4 object_color;
void main(void){
gl_Position = vec4(vertex, 1.0f);
object_color = vec4(0.f, data, 1.0f, 1.0f);
}
Fragment Shader:
#version 330 core
in vec4 object_color;
out vec4 FragColor;
void main(void){
FragColor = object_color;
}
The expected result is an output of a VBO with a colour.
I've run into the strangest problem I've ever had in my shader experience. I created some test shader code (see below) and ran it on a simple texture.
Basically what I was trying to do is make my shader check the color of a fragment, and if it was within a certain range it would color that fragment according to a uniform color variable. The problem I'm having is that my shader does not correctly recognize the color of a fragment. I even went as far as to check if the red portion of the color is equal to one and it always returns true for every fragment. Yet if I use the same shader to draw the original texture it works just fine.
Why is this happening? The shader compiles without any errors whatsoever. I feel like I'm missing something obvious...
Code (if you have access to LibGDX you can run this code for yourself).
// Vertex
#version 330
in vec4 a_position;
in vec4 a_color;
in vec2 a_texCoord0;
out vec4 v_color;
out vec2 v_texCoord0;
uniform mat4 u_projTrans;
void main() {
v_color = a_color;
v_texCoord0 = a_texCoord0;
gl_Position = u_projTrans * a_position;
}
// Fragment
#version 330
in vec4 v_color;
in vec2 v_texCoord0;
out vec4 outColor;
uniform vec4 color;
uniform sampler2D u_texture;
void main() {
// This is always true for some reason...
if(v_color.r == 1.0) {
outColor = color;
} else {
// But if I run this code it draws the original texture just fine with the correct colors.
outColor = v_color * texture2D(u_texture, v_texCoord0);
}
}
// Java code.
// Creating sprite, shader and sprite batch.
SpriteBatch batch = new SpriteBatch();
Sprite sprite = new Sprite(new Texture("testTexture.png"));
ShaderProgram shader = new ShaderProgram(Gdx.files.internal("vertex.vert"),
Gdx.files.internal("fragment.frag"));
// Check to see if the shader has logged any errors. Prints nothing.
System.out.println(shader.getLog());
// We start the rendering.
batch.begin();
// We begin the shader so we can load uniforms into it.
shader.begin();
// Set the uniform (works fine).
shader.setUniformf("color", Color.RED);
// We end the shader to tell it we've finished loading uniforms.
shader.end();
// We then tell our renderer to use this shader.
batch.setShader(shader);
// Then we draw our sprite.
sprite.draw(batch);
// And finally we tell our renderer that we're finished drawing.
batch.end();
// Dispose to release resources.
shader.dispose();
batch.dispose();
sprite.getTexture().dispose();
The texture:
You have two input colors in your fragment shader: one passed as a vertex attribute and one sampled from a texture. You intend to check the texture color, but instead you are checking the color value passed as the vertex attribute.
// Vertex
#version 330
in vec4 a_position;
in vec4 a_color; // <--- A color value is passed as a vertex attribute
in vec2 a_texCoord0;
out vec4 v_color;
out vec2 v_texCoord0;
uniform mat4 u_projTrans;
void main() {
v_color = a_color; // <--- you are sending it to fragment shader
v_texCoord0 = a_texCoord0;
gl_Position = u_projTrans * a_position;
}
// Fragment
#version 330
in vec4 v_color; // <--- coming from vertex buffer
in vec2 v_texCoord0;
out vec4 outColor;
uniform vec4 color;
uniform sampler2D u_texture;
void main() {
// This is always true for some reason...
if(v_color.r == 1.0) { // <--- and then checking this vertex attribute color
outColor = color; // instead of what you send from texture
} else {
...
}
}
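A sketch of the fragment shader rewritten so the branch keys off the sampled texture color instead of the vertex color, written here as a LibGDX source string; the 0.99 threshold (rather than an exact == 1.0 comparison) is an assumption to avoid floating-point equality problems:
String fragmentSrc =
      "#version 330\n"
    + "in vec4 v_color;\n"
    + "in vec2 v_texCoord0;\n"
    + "out vec4 outColor;\n"
    + "uniform vec4 color;\n"
    + "uniform sampler2D u_texture;\n"
    + "void main() {\n"
    + "    vec4 texColor = texture(u_texture, v_texCoord0);\n"
    + "    if (texColor.r > 0.99) {\n"
    + "        outColor = color;              // recolor fragments that are red in the texture\n"
    + "    } else {\n"
    + "        outColor = v_color * texColor;\n"
    + "    }\n"
    + "}\n";
ShaderProgram shader = new ShaderProgram(Gdx.files.internal("vertex.vert").readString(), fragmentSrc);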
I'm using Java and LWJGL/OpenGL to create some graphics. For rendering I use the following:
Constructor of RawModel:
public RawModel(int vaoID, int vertexCount, String name){
this.vaoID = vaoID;
this.vertexCount = vertexCount;
this.name = name;
}
Renderer:
public void render(Entity entity, StaticShader shader){
RawModel rawModel = entity.getRaw(active);
GL30.glBindVertexArray(rawModel.getVaoID());
GL20.glEnableVertexAttribArray(0);
GL20.glEnableVertexAttribArray(1);
GL20.glEnableVertexAttribArray(3);
Matrix4f transformationMatrix = Maths.createTransformationMatrix(entity.getPosition(),
entity.getRotX(), entity.getRotY(), entity.getRotZ(), entity.getScale());
shader.loadTransformationMatrix(transformationMatrix);
GL11.glDrawElements(GL11.GL_TRIANGLES, rawModel.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);
GL20.glDisableVertexAttribArray(0);
GL20.glDisableVertexAttribArray(1);
GL20.glDisableVertexAttribArray(3);
GL30.glBindVertexArray(0);
}
I'm using GL11.GL_TRIANGLES because that's how I can make the models' lines show up instead of faces. But when I set a color for a vertex, it just colors the surrounding lines with that color, and in some cases all of its lines just take the color of the surrounding vertices. How can I make it combine those two colors depending on the distance to each vertex and the colors?
Fragment shader:
#version 400 core
in vec3 colour;
in vec2 pass_textureCoords;
out vec4 out_Color;
uniform sampler2D textureSampler;
void main(void){
vec4 textureColour = texture(textureSampler, pass_textureCoords);
//out_Color = texture(textureSampler, pass_textureCoords);
out_Color = vec4(colour, 1.0);
}
Vertex shader:
#version 400 core
in vec3 position;
in vec2 textureCoords;
in int selected;
out vec3 colour;
out vec2 pass_textureCoords;
uniform mat4 transformationMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
void main(void){
gl_Position = projectionMatrix * viewMatrix * transformationMatrix * vec4(position, 1.0);
pass_textureCoords = textureCoords;
if(selected == 1){
colour = vec3(200, 200, 200);
}
else{
colour = vec3(0, 0, 0);
}
gl_BackColor = vec4(colour, 1.0);
}
I'm using GL11.GL_TRIANGLES because that's how I can make the models' lines show up instead of faces.
Well, GL_TRIANGLES is for rendering triangles, which are faces. If you only want the models' lines, you can use one of the line drawing modes (GL_LINES, GL_LINE_LOOP, GL_LINE_STRIP, etc.).
However, a better way is to set
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
which makes only the outlines of the triangles show up. You can switch back to filled rendering with
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
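In LWJGL terms, that toggle would look roughly like this (a sketch, assuming the render() method shown above):
// Draw only the triangle outlines for this entity, then restore filled rendering.
GL11.glPolygonMode(GL11.GL_FRONT_AND_BACK, GL11.GL_LINE);
GL11.glDrawElements(GL11.GL_TRIANGLES, rawModel.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);
GL11.glPolygonMode(GL11.GL_FRONT_AND_BACK, GL11.GL_FILL);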
How can I make it combine those two colors depending on the distance to each vertex and the colors?
I'm not sure what you mean by this; by default, values passed from the vertex shader to the fragment shader are interpolated across each triangle, so the color a fragment receives already depends on the colors of, and distances to, the surrounding vertices.
Edit:
In the vertex shader:
if(selected == 1){
colour = vec3(200, 200, 200);
}
I assume you want to assign an RGB value of (200, 200, 200), i.e. a light gray on the 0-255 scale. However, OpenGL uses floating-point color components in the range 0.0 to 1.0, and values outside this range are clamped. The value of colour is interpolated across the fragments, which therefore receive components far greater than 1.0. These are clamped to 1.0, so all of your fragments appear white.
So, in order to solve this issue, you have to instead use something like
colour = vec3(0.8, 0.8, 0.8);
Edit:
I've updated my code after TenFour04's answer, but it still just shows black.
I've updated my libGDX version, which requires me to use GL20 and has led me to make a few changes.
Most of it works fine, except when trying to texture the mesh. It currently shows the surface mesh as black and doesn't show the ground mesh at all; with some changes I can get it to show both the surface and ground meshes as black.
I've played around with the binding, the order of surTexture.bind and grdTexture.bind, using unit numbers less than 16, and the render order, and I've got it to use the surface texture as the texture for everything except the surface and the ground.
Can anyone see where I might be going wrong with this?
// creating a mesh with maxVertices set to vertices.size*3
groundMesh = new Mesh(true, vertices.size*3, vertices.size*3,
new VertexAttribute(Usage.Position,2,ShaderProgram.POSITION_ATTRIBUTE),
new VertexAttribute(Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE));
groundMesh.setVertices(temp);
short[] indices = new short[vertices.size*2];
for(int i=0;i<vertices.size*2;i++){
indices[i] = (short)i;
}
groundMesh.setIndices(indices);
surfaceMesh = new Mesh(true, vertices.size*3, vertices.size*3,
new VertexAttribute(Usage.Position,3,ShaderProgram.POSITION_ATTRIBUTE),
new VertexAttribute(Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE));
...
grdTexture = new Texture(Gdx.files.internal("data/img/leveltest/ground.png"));
// Gdx.graphics.getGL20().glActiveTexture(GL20.GL_TEXTURE16);
// the docs say that setWrap and setFilter bind the texture, so I thought I might have to set
// the active texture here, but it does nothing.
grdTexture.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
grdTexture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
surTexture = new Texture(Gdx.files.internal("data/img/leveltest/surface.png"));
// Gdx.graphics.getGL20().glActiveTexture(GL20.GL_TEXTURE17);
surTexture.setWrap(TextureWrap.Repeat, TextureWrap.ClampToEdge);
//TODO change these filters for better quality
surTexture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
drawWorld gets called inside render()
public void drawWorld(SpriteBatch batch,OrthographicCamera camera) {
batch.begin();
batch.setProjectionMatrix(camera.combined);
layers.drawLayers(batch);
if ((spatials != null) && (spatials.size > 0)){
for (int i = 0; i < spatials.size; i++){
spatials.get(i).render(batch);
}
}
batch.end();
drawGround(camera);
}
private void drawGround(OrthographicCamera camera){
shader.begin();
shader.setUniformMatrix("u_projTrans", camera.combined);
grdTexture.bind(0);
shader.setUniformi("u_texture", 0);
//changed GL_TRIANGLES to GL_TRIANGLE_STRIP to render meshes correctly after changing to GL20
groundMesh.render(shader, GL20.GL_TRIANGLE_STRIP);
surTexture.bind(0);
shader.setUniformi("u_texture", 0);
surfaceMesh.render(shader, GL20.GL_TRIANGLE_STRIP);
shader.end();
}
fragment.glsl
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif
varying LOWP vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
void main()
{
gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
}
vertex.glsl
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
v_color = a_color;
v_color.a = v_color.a * (256.0/255.0);
v_texCoords = a_texCoord;
gl_Position = u_projTrans * a_position;
}
In your vertex shader, you are using a_texCoord, but in your mesh constructor, you have effectively named your attributes a_texCoord16 and a_texCoord17 by using ShaderProgram.TEXCOORD_ATTRIBUTE+"16" and ShaderProgram.TEXCOORD_ATTRIBUTE+"17".
Since you are not multi-texturing, I would just replace those with "a_texCoord".
It looks like maybe you are conflating attribute name suffixes with texture units, although the two concepts are not necessarily related. The reason you might want to add number suffixes to your texCoords is if your mesh has multiple UVs for each vertex because it is multi-textured. But really you can use any naming scheme you like. The reason you might want to bind to a unit other than 0 is if you're multi-texturing a single mesh, so you need multiple textures bound at once. So if you actually were multi-texturing, using attribute suffixes that match texture unit numbers might help avoid confusion when you are trying to match UVs to textures in the fragment shader.
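If you actually were multi-texturing a single mesh, the binding side would look roughly like this (a sketch; the second sampler uniform and the two-sampler shader it implies are hypothetical):
// Hypothetical multi-texturing: two textures on two units, two sampler uniforms.
grdTexture.bind(0);                    // texture unit 0
surTexture.bind(1);                    // texture unit 1
shader.setUniformi("u_texture", 0);    // sampler for the ground texture
shader.setUniformi("u_texture2", 1);   // hypothetical second sampler
groundMesh.render(shader, GL20.GL_TRIANGLE_STRIP);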
OK, so it turns out the problem was the vertex shader.
The code from here doesn't work.
Here is the working shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
v_color = vec4(1, 1, 1, 1);
v_texCoords = a_texCoord;
gl_Position = u_projTrans * a_position;
}
I'm very new to OpenGL and LibGDX. I started with these tutorials but wanted to apply a Phong shader. I've tried to merge a number of examples but am having issues.
I've got a sphere and a spinning cube in the center of the screen. I've still got hundreds of things to work out, but for the moment I don't understand why LibGDX is reporting that my matrix uniform can't be found...
Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: java.lang.IllegalArgumentException: no uniform with name 'uMvpMatrix' in shader
Vertex Shader
I don't believe the fragment shader is relevant, but it's at the bottom just in case.
#version 120
uniform mat4 uMvpMatrix;
varying vec3 diffuseColor;
// the diffuse Phong lighting computed in the vertex shader
varying vec3 specularColor;
// the specular Phong lighting computed in the vertex shader
varying vec4 texCoords; // the texture coordinates
void main()
{
vec3 normalDirection =
normalize(gl_NormalMatrix * gl_Normal);
vec3 viewDirection =
-normalize(vec3(gl_ModelViewMatrix * gl_Vertex));
vec3 lightDirection;
float attenuation;
if (0.0 == gl_LightSource[0].position.w)
// directional light?
{
attenuation = 1.0; // no attenuation
lightDirection =
normalize(vec3(gl_LightSource[0].position));
}
else // point light or spotlight (or other kind of light)
{
vec3 vertexToLightSource =
vec3(gl_LightSource[0].position
- gl_ModelViewMatrix * gl_Vertex);
float distance = length(vertexToLightSource);
attenuation = 1.0 / distance; // linear attenuation
lightDirection = normalize(vertexToLightSource);
if (gl_LightSource[0].spotCutoff <= 90.0) // spotlight?
{
float clampedCosine = max(0.0, dot(-lightDirection,
gl_LightSource[0].spotDirection));
if (clampedCosine < gl_LightSource[0].spotCosCutoff)
// outside of spotlight cone?
{
attenuation = 0.0;
}
else
{
attenuation = attenuation * pow(clampedCosine,
gl_LightSource[0].spotExponent);
}
}
}
vec3 ambientLighting = vec3(gl_LightModel.ambient);
// without material color!
vec3 diffuseReflection = attenuation
* vec3(gl_LightSource[0].diffuse)
* max(0.0, dot(normalDirection, lightDirection));
// without material color!
vec3 specularReflection;
if (dot(normalDirection, lightDirection) < 0.0)
// light source on the wrong side?
{
specularReflection = vec3(0.0, 0.0, 0.0);
// no specular reflection
}
else // light source on the right side
{
specularReflection = attenuation
* vec3(gl_LightSource[0].specular)
* vec3(gl_FrontMaterial.specular)
* pow(max(0.0, dot(reflect(-lightDirection,
normalDirection), viewDirection)),
gl_FrontMaterial.shininess);
}
diffuseColor = ambientLighting + diffuseReflection;
specularColor = specularReflection;
texCoords = gl_MultiTexCoord0;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Setup
shader = new ShaderProgram(vertexShader, fragmentShader);
mesh = Shapes.genCube();
mesh.getVertexAttribute(Usage.Position).alias = "a_position";
Render
...
float aspect = Gdx.graphics.getWidth() / (float) Gdx.graphics.getHeight();
projection.setToProjection(1.0f, 20.0f, 60.0f, aspect);
view.idt().trn(0, 0, -2.0f);
model.setToRotation(axis, angle);
combined.set(projection).mul(view).mul(model);
Gdx.gl20.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
shader.begin();
shader.setUniformMatrix("uMvpMatrix", combined);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
Stack Trace
at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:113)
Caused by: java.lang.IllegalArgumentException: no uniform with name 'uMvpMatrix' in shader
at com.badlogic.gdx.graphics.glutils.ShaderProgram.fetchUniformLocation(ShaderProgram.java:283)
at com.badlogic.gdx.graphics.glutils.ShaderProgram.setUniformMatrix(ShaderProgram.java:539)
at com.badlogic.gdx.graphics.glutils.ShaderProgram.setUniformMatrix(ShaderProgram.java:527)
at com.overshare.document.Views.Test.onRender(Test.java:150)
...
Fragment Shader
#ifdef GL_ES
precision mediump float;
#endif
precision mediump float;
varying vec4 v_Color;
void main()
{
gl_FragColor = v_Color;
}
Can someone please tell me what I'm missing?
I ran into something similar. I think that because your shader doesn't use the uMvpMatrix uniform, its declaration gets optimized out, so it's "not there" when you go to set it. If you change your shader to actually reference the matrix (for example, by computing gl_Position with uMvpMatrix instead of gl_ModelViewProjectionMatrix), you should get further.
See (indirectly)
Do (Unused) GLSL uniforms/in/out Contribute to Register Pressure?
I believe there are ways of developing and compiling shaders offline, so for a complex shader it may make sense to develop it outside of LibGDX (hopefully you'd get better error messages). LibGDX just passes the string on to the lower layers; it doesn't do much with the shader itself, so there shouldn't be compatibility issues.
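While iterating on the shader, you can also guard the call so an optimized-out uniform doesn't throw (a workaround sketch, not a fix for the underlying issue):
// Only upload the matrix if the uniform survived the driver's optimization.
if (shader.hasUniform("uMvpMatrix")) {
    shader.setUniformMatrix("uMvpMatrix", combined);
}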
Also, the problem could be the precision specifier. On a device (a Nexus S, in my case) the following uniform declaration will throw the same error:
uniform float yShift;
Adding a precision specifier solves the problem:
uniform lowp float yShift;
LibGDX lets you check shader compilation and get the error log:
if(!shader.isCompiled()){
String log = shader.getLog();
}
Finally, there's a flag to ignore shader errors:
ShaderProgram.pedantic = false;