I'm very new to OpenGL and LibGDX. I started with these tutorials but wanted to apply Phong shading. I've tried to merge a number of examples, but I'm having issues.
I've got a sphere and a spinning cube in the center of the screen. I've still got hundreds of things to work out, but for the moment I don't understand why LibGDX reports that my uniform matrix can't be found:
Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: java.lang.IllegalArgumentException: no uniform with name 'uMvpMatrix' in shader
Vertex Shader
(I don't believe the fragment shader is relevant, but it's at the bottom just in case.)
#version 120

uniform mat4 uMvpMatrix;

varying vec3 diffuseColor;  // the diffuse Phong lighting computed in the vertex shader
varying vec3 specularColor; // the specular Phong lighting computed in the vertex shader
varying vec4 texCoords;     // the texture coordinates

void main()
{
    vec3 normalDirection = normalize(gl_NormalMatrix * gl_Normal);
    vec3 viewDirection = -normalize(vec3(gl_ModelViewMatrix * gl_Vertex));
    vec3 lightDirection;
    float attenuation;

    if (0.0 == gl_LightSource[0].position.w) // directional light?
    {
        attenuation = 1.0; // no attenuation
        lightDirection = normalize(vec3(gl_LightSource[0].position));
    }
    else // point light or spotlight (or other kind of light)
    {
        vec3 vertexToLightSource = vec3(gl_LightSource[0].position - gl_ModelViewMatrix * gl_Vertex);
        float distance = length(vertexToLightSource);
        attenuation = 1.0 / distance; // linear attenuation
        lightDirection = normalize(vertexToLightSource);

        if (gl_LightSource[0].spotCutoff <= 90.0) // spotlight?
        {
            float clampedCosine = max(0.0, dot(-lightDirection, gl_LightSource[0].spotDirection));
            if (clampedCosine < gl_LightSource[0].spotCosCutoff) // outside of spotlight cone?
            {
                attenuation = 0.0;
            }
            else
            {
                attenuation = attenuation * pow(clampedCosine, gl_LightSource[0].spotExponent);
            }
        }
    }

    vec3 ambientLighting = vec3(gl_LightModel.ambient); // without material color!

    vec3 diffuseReflection = attenuation
        * vec3(gl_LightSource[0].diffuse)
        * max(0.0, dot(normalDirection, lightDirection)); // without material color!

    vec3 specularReflection;
    if (dot(normalDirection, lightDirection) < 0.0) // light source on the wrong side?
    {
        specularReflection = vec3(0.0, 0.0, 0.0); // no specular reflection
    }
    else // light source on the right side
    {
        specularReflection = attenuation
            * vec3(gl_LightSource[0].specular)
            * vec3(gl_FrontMaterial.specular)
            * pow(max(0.0, dot(reflect(-lightDirection, normalDirection), viewDirection)),
                  gl_FrontMaterial.shininess);
    }

    diffuseColor = ambientLighting + diffuseReflection;
    specularColor = specularReflection;
    texCoords = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Setup
shader = new ShaderProgram(vertexShader, fragmentShader);
mesh = Shapes.genCube();
mesh.getVertexAttribute(Usage.Position).alias = "a_position";
Render
...
float aspect = Gdx.graphics.getWidth() / (float) Gdx.graphics.getHeight();
projection.setToProjection(1.0f, 20.0f, 60.0f, aspect);
view.idt().trn(0, 0, -2.0f);
model.setToRotation(axis, angle);
combined.set(projection).mul(view).mul(model);
Gdx.gl20.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
shader.begin();
shader.setUniformMatrix("uMvpMatrix", combined);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
Stack Trace
at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:113)
Caused by: java.lang.IllegalArgumentException: no uniform with name 'uMvpMatrix' in shader
at com.badlogic.gdx.graphics.glutils.ShaderProgram.fetchUniformLocation(ShaderProgram.java:283)
at com.badlogic.gdx.graphics.glutils.ShaderProgram.setUniformMatrix(ShaderProgram.java:539)
at com.badlogic.gdx.graphics.glutils.ShaderProgram.setUniformMatrix(ShaderProgram.java:527)
at com.overshare.document.Views.Test.onRender(Test.java:150)
...
Fragment Shader
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_Color;
void main()
{
    gl_FragColor = v_Color;
}
Can someone please tell me what I'm missing?
I ran into something similar. I think that because your shader doesn't use the "uMvpMatrix" uniform, its declaration gets optimized out, and so it's "not there" when you go to set it. If you change your shader to reference the matrix in some way, you should get further.
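For example, a minimal change (a sketch against the vertex shader above, not the only possible fix) is to transform with the uniform you declared instead of the built-in matrix, so the compiler can't strip it:
gl_Position = uMvpMatrix * gl_Vertex; // uMvpMatrix is now referenced, so it survives optimization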
See (indirectly)
Do (Unused) GLSL uniforms/in/out Contribute to Register Pressure?
I believe there are ways of developing and compiling shaders offline, so for a complex shader it may make sense to develop it outside of LibGDX (hopefully you'd get better error messages). LibGDX just passes the shader string on to the lower layers; it doesn't do much with the shader itself, so there shouldn't be compatibility issues.
Also, the problem could be in the precision specifier. On a device (Nexus S), the following uniform definition will throw the same error:
uniform float yShift;
Using a precision specifier solves the problem:
uniform lowp float yShift;
LibGDX lets you check shader compilation and get the error log:
if (!shader.isCompiled()) {
    String log = shader.getLog();
}
Finally, there's a flag to ignore shader errors:
ShaderProgram.pedantic = false;
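Putting those pieces together, a minimal setup sketch (assuming the vertexShader and fragmentShader strings from the question):
ShaderProgram.pedantic = false; // don't throw on uniforms the compiler optimized away
ShaderProgram shader = new ShaderProgram(vertexShader, fragmentShader);
if (!shader.isCompiled()) {
    Gdx.app.error("Shader", shader.getLog()); // surface compile/link errors early
}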
Related
I need to compute normals for each triangle face (not for each vertex) using OpenGL ES 2.0 on Android, but I can't pass an attribute to the fragment shader directly.
I found one solution: repeat the vertices for each triangle, and pass the triangle face normal as an attribute in the vertex shader.
But I don't want to duplicate the vertices; I am drawing triangles using vertex indices.
So, given that a vertex is shared by more than one triangle, how should I compute triangle face normals?
P.S. I am a newbie in OpenGL.
The easiest solution is just to duplicate vertices. The vertex shader is very rarely a bottleneck. I do not know your specific needs, but there are circumstances when duplicating a vertex is not a good solution: for example, if the mesh is skinned and animated, a lot of computation happens in the vertex shader; another case is when the mesh is animated in some weird way in the vertex shader and you have to recompute the normals.
Clearly, you cannot compute per-face normals in the vertex shader. You could do that in a geometry shader, but OpenGL ES 2.0 does not have one. However, there is a simple solution: compute the normals in the fragment shader! So, if duplicating vertices does not work for you, here is the solution:
We will need an OpenGL extension, standard_derivatives, which is widely supported, but you will still need to check whether it is supported on the device before running the code. To enable the extension, add the following line to the fragment shader before its code:
#extension GL_OES_standard_derivatives : enable
We will need a varying variable for the vertex position in world coordinates. It should be computed in the vertex shader; exactly how depends on your shader. It is used for many things, so you may already be computing it in your vertex shader. Let's assume we have this line in the fragment shader:
varying vec3 positionWorld;
We will need the view matrix of the camera. It is possible that you are already passing one to the fragment shader. Let's assume we have this uniform in the fragment shader:
uniform mat4 viewMatrix;
Now we are going to compute the normal: first we compute it in view space, then transform it into world space. To compute the normal in view space, we use the derivative functions:
vec3 normalViewSpace = normalize(cross(dFdx(positionWorldSpace), dFdy(positionWorldSpace)));
Here the derivative of the position is taken with respect to the x and y coordinates in screen space. This gives us two vectors that lie in the surface plane. To get the normal to the surface, we take their cross product. Of course, the result is not a unit vector, so we also need to normalize it.
The last step is to compute the normal in world space. The view matrix transforms from world space to view space. One might think we need its inverse, since we are going from view space to world space, but because the view matrix is orthonormal, its transpose is also its inverse, so the code is:
vec3 normalWorldSpace = (vec4(normalViewSpace, 0.0) * viewMatrix).xyz;
To make life easier, we can wrap everything into a function:
vec3 ReconstructNormal(vec3 positionWorldSpace)
{
    vec3 normalViewSpace = normalize(cross(dFdx(positionWorldSpace), dFdy(positionWorldSpace)));
    vec3 normalWorldSpace = (vec4(normalViewSpace, 0.0) * viewMatrix).xyz;
    return normalWorldSpace;
}
Now we have a reconstructed normal in world space. Below is a simple example of why this can be very useful. Note that since it uses WebGL, it is also pretty much OpenGL ES 2.0 compatible.
var container;
var camera, scene, renderer;
var mesh;
var uniforms;
var clock = new THREE.Clock();
init();
animate();
function init() {
container = document.getElementById('container');
camera = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 0.6;
camera.position.y = 0.2;
camera.rotation.x = -0.45;
scene = new THREE.Scene();
var boxGeometry = new THREE.PlaneGeometry(0.75, 0.75, 32, 32);
var heightMap = THREE.ImageUtils.loadTexture("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAAFiUAABYlAUlSJPAAAAeeSURBVFhHTdfFq1VfFAfwfbzP7q5ndwcm2IqKgQoiOhQFQXAgDpxecObAP0ARdGJMdGDhwAILO7C7uzvv73wWHPltuLx7z9l7rfWNtc552ZQpUypt27ZN7dq1S/9f7969S1evXk116tRJtWrVSj9//kzfv39PX79+jfvdu3dPpVIp3b9/P3Xo0CH1798/tWrVKjVs2DBt3749vX//PnXr1i3VqFEj9jVq1Ci9efMm3blzJ7Vp0yY1adIk/fr1K2Xz5s2r2CjRy5cv42/r1q3Tnz9/0smTJ6MQh+vVq5du3LgRwSSsrq5OnTp1Sq9fv05fvnyJ38OHD0+7du1K+/bti8Jq166dfvz4EQU7v2TJkoh35cqVADN37txU1bJly0hm+VskaNasWXr+/HkEUe2AAQNSnz590ufPn1OLFi3Sx48f0+3btyO5fUePHo2CJ0yYkJo2bRpgqqqqIq4lzsaNG/8xaCkqGJDYjbdv36aaNWtGkKlTpwadEBTyPHz4MB0/fjw9efIknTp1Kn5bM2fODMqdlUjiFy9epAYNGsR9zCnYck0u+6KA5cuXVz59+hQXX716FbTWrVs3jR8/PnTt169f2rJlS9q2bVsEcNCi56NHj2IvyTAE+ePHj8Mv1u/fv0PSW7duhYwF01h2RsHZ4sWLw4SoVZ1No0ePjorpK8izZ8/+oejRo0dq37590P306dPYL6nCly5dGqyRD5uNGzf+J5Gk58+fj+Lu3bsXHsBsKa+6rHLJaevQgQMHIgGKbt68GYV8+/YtdBVM5c2bN093794NFnSE89euXYvvf//+Tbxlr2u6yHfxFaczLl26FEbNunTpUhk8eHAaMmRIOJSW6NUZUEvKB5IVa+DAgWncuHERvH79+oHGXmcZFgs+FpYktQ8b2OzVq1dI53wp17n84cOH0B/1c+bMiWIk6dixYxSjvzEjETTMh2IsYO/BgweRHAuoJomCJVq0aFHq2bNn9P/FixdDUrIxKtlKq1atKgtq8+HDhyOhBJs3b46gDEozruYBByWlJbSuW8ypKEEZbMyYMWnatGnxe/bs2SGj8zoIEKDFz9atW1dRoemltVRIQ0msQYMGxV9ONrlobpLpDguNCu7du3cwggVzBOUkVfzIkSMDGIDXr1+PYu0HvJQjL2sRSFUkuRvah0sVhTKGLFAzUWEwaEzBYujwjFiuQ01rPoAYesAUpJMUm61evboisCQuQLhnz55/wwJtClQxYxWTU3KLsxXgnsS6BpBiovouJqOPGjUq/GI/QwIYg0ilKmLCNWvWRKUYUIDrqtdeJqT29JAih6SYcV4y5yR2DgAsYKpz586xjyx8Ip4zfme5xhUobUIvhJI5LKilUnMAU5s2bUpHjhyJBJZkffv2TQsXLgxvSEhv0ulzC1rJ0C85mbW2XKU8cZluWocEivAdza6jmpkEKbrCb0uRCrAUp6hZs2bF044xtZ72dj3LstjP5L67RopS/oQr63HjGE2VSiUOqVYxKNVqdPRhKMbq2rVrJMUMUyoacmZ0HxMSug89YyteLuPcHvKGCaGlH00dKGZ10QkGEjQWlhiPhgzlGWLSKdp+Le2ae5gS234GFBsDpijmgCvllZXR5YJhxFyQCwYxrQREm6ECAZbco+HYsWNjJqDTnt27d8fEwwBAihfPuwPGMIExw4qpS/mXsocLFFpNVaq0UQIHtRYkPsxqL4Pmz5H4LjnjOnfs2LF/b0H+8g0wigGAAXnkwoULYdRSfrhMe4FpjWp6QsOx0Jpwng/QFAUKcOLEifjus3///kCGZtT7bi/UZCMxf/Ca/TqPjNnEiRMrKGIgzvRxoxjFWgxaBaLNfc91j9PipbUwnTZkLOMWYtrzku/YIqX9ZoHCmDJbuXJlxWFoVeqANXny5PiLdgEFGDFiRDp06FCg8c5g2Y9qUhTUkwPdEENrDRs2LJ4FJOEznUHebO3atRUDA21QM6MAPgynALoxFy/Q2D7f7ZWUhBZ2UKx4OpMBelLw06RJkyIPBrFw+vTpVMo1LqPDDQm9C+YvqoFSO27dujXt3bs3ng9aS1D7+EJxinAeEwrTFZ4dUGJAUQrUzszOI86KxV/ZggULKtB6WHC+YKqmt5dRSM+cORMIeUVy+/Uyr/CBYJiQLP9HJ15gCo8oBAOSkxEoIMQyM7INGzbEW7HFB2YCGplFN+hpKMkgqUDYcoa+06dPD13piQGv85KuX78+nhnFdFSgj9gex2YAk2b5W0s8jpkINTZLzNG+Q2mj32iGwBBCP5mgNVpNOoigO3fuXPQ4FgojAiGOLlHk0KFD4z0y/jFxiCksdKFZzxfz3JLQHkm9sDKVB5QgHL1ixYo4y9DenhQAGFnd12FAzJ8/P+IfPHgwnT17NpXyqstQ0Jx5zHSoTTeUFg8nlF++fDkQGiJ0J5l79kLpRVZg5739OiOGxM6ZJ87633HHjh0x9Er5y2PZTYZBPRSqdojZvFCqmAcgkojmPgaL/xfpumzZsvjN8SYqnwCEQei1pt9A7Ny5M4CZrlV0Qa8ukNwmfz1gFKFKukPqugQSksB0dI3RdMuMGTNCazJoNxK6rztIx/liGWgYr66uTv8BU33Si9zKcpYAAAAASUVORK5CYII=");
heightMap.wrapT = heightMap.wrapS = THREE.RepeatWrapping;
uniforms = {u_time: {type: "f", value: 0.0 }, u_heightMap: {type: "t",value:heightMap} };
var material = new THREE.ShaderMaterial({
uniforms: uniforms,
side: THREE.DoubleSide,
wireframe: false,
vertexShader: document.getElementById('vertexShader').textContent,
fragmentShader: document.getElementById('fragment_shader').textContent
});
mesh = new THREE.Mesh(boxGeometry, material);
mesh.rotation.x = 3.14 / 2.0;
scene.add(mesh);
renderer = new THREE.WebGLRenderer();
renderer.setClearColor( 0x000000, 1 );
container.appendChild(renderer.domElement);
onWindowResize();
window.addEventListener('resize', onWindowResize, false);
}
function onWindowResize(event) {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
}
function animate() {
requestAnimationFrame(animate);
render();
}
function render() {
var delta = clock.getDelta();
uniforms.u_time.value += delta;
//mesh.rotation.z += delta * 0.5;
renderer.render(scene, camera);
}
body { margin: 0px; overflow: hidden; }
<script src="https://threejs.org/build/three.min.js"></script>
<div id="container"></div>
<script id="fragment_shader" type="x-shader/x-fragment">
#extension GL_OES_standard_derivatives : enable
varying vec3 positionWorld; // position of vertex in world coordinates
vec3 ReconstructNormal(vec3 positionWorldSpace)
{
vec3 normalViewSpace = normalize(cross(dFdx(positionWorldSpace), dFdy(positionWorldSpace)));
vec3 normalWorldSpace = (vec4(normalViewSpace, 0.0) * viewMatrix).xyz;
return normalWorldSpace;
}
// Just some example of using a normal. Here we do a really simple shading
void main( void )
{
vec3 lightDir = normalize(vec3(1.0, 1.0, 1.0));
vec3 normal = ReconstructNormal(positionWorld);
float diffuse = max(dot(lightDir, normal), 0.0);
vec3 albedo = vec3(0.2, 0.4, 0.7);
gl_FragColor = vec4(albedo * diffuse, 1.0);
}
</script>
<script id="vertexShader" type="x-shader/x-vertex">
uniform lowp sampler2D u_heightMap;
uniform float u_time;
varying vec3 positionWorld;
// Example of vertex shader that moves vertices
void main()
{
vec3 pos = position;
vec2 offset1 = vec2(1.0, 0.5) * u_time * 0.01;
vec2 offset2 = vec2(0.5, 1.0) * u_time * 0.01;
float height1 = texture2D(u_heightMap, uv + offset1).r * 0.02;
float height2 = texture2D(u_heightMap, uv + offset2).r * 0.02;
pos.z += height1 + height2;
vec4 mvPosition = modelViewMatrix * vec4( pos, 1.0 );
positionWorld = mvPosition.xyz;
gl_Position = projectionMatrix * mvPosition;
}
</script>
I'm using OpenGL (4.5 core, with LWJGL 3.0.0 build 90) and I noticed some artifacts on textures using the GL_REPEAT wrap mode with a high number of repetitions:
What causes this, and how can I fix it (if I can)?
Here, the plane's size is 100x100 and the UVs are 10000x10000. This screenshot is taken really, really close to the plane (from farther away, the texture is so small that we just see a flat gray color); the near plane is at 0.0001 and the far plane at 10.
I'm not sure the problem is in the depth buffer, since the default OpenGL one has really high precision at close distances.
(EDIT: I'm thinking of a floating-point error on the texture coordinates, but I'm not sure.)
Here are my shaders (I'm using deferred rendering, and the texture sampling happens in the geometry pass, so I'm only giving the geometry pass shaders).
Vertex shader:
#version 450 core
uniform mat4 projViewModel;
uniform mat4 viewModel;
uniform mat3 normalView;
in vec3 normal_model;
in vec3 position_model;
in vec2 uv;
in vec2 uv2;
out vec3 pass_position_view;
out vec3 pass_normal_view;
out vec2 pass_uv;
out vec2 pass_uv2;
void main(){
    pass_position_view = (viewModel * vec4(position_model, 1.0)).xyz;
    pass_normal_view = normalView * normal_model;
    pass_uv = uv;
    pass_uv2 = uv2;
    gl_Position = projViewModel * vec4(position_model, 1.0);
}
Fragment shader:
#version 450 core

struct Material {
    sampler2D diffuseTexture;
    sampler2D specularTexture;
    vec3 diffuseColor;
    float uvScaling;
    float shininess;
    float specularIntensity;
    bool hasDiffuseTexture;
    bool hasSpecularTexture;
    bool faceSideNormalCorrection;
};

uniform Material material;

in vec3 pass_position_view;
in vec3 pass_normal_view;
in vec2 pass_uv;
in vec2 pass_uv2;

layout(location = 0) out vec4 out_diffuse;
layout(location = 1) out vec4 out_position;
layout(location = 2) out vec4 out_normal;

void main(){
    vec4 diffuseTextureColor = vec4(1.0);
    if(material.hasDiffuseTexture){
        diffuseTextureColor = texture(material.diffuseTexture, pass_uv * material.uvScaling);
    }

    float specularTextureIntensity = 1.0;
    if(material.hasSpecularTexture){
        specularTextureIntensity = texture(material.specularTexture, pass_uv * material.uvScaling).x;
    }

    vec3 fragNormal = pass_normal_view;
    if(material.faceSideNormalCorrection && !gl_FrontFacing){
        fragNormal = -fragNormal;
    }

    out_diffuse = vec4(diffuseTextureColor.rgb * material.diffuseColor, material.shininess);
    out_position = vec4(pass_position_view, 1.0); // must be 1.0 in the alpha -> 0.0 = sky
    out_normal = vec4(fragNormal, material.specularIntensity * specularTextureIntensity);
}
Yes, I know the eye-space position is useless in the G-Buffer, since you can compute it later from the depth buffer; I've just done it this way for now, and it's temporary.
Also, if anything in my shaders is deprecated or bad practice, it would be cool if you told me what to do instead! Thanks!
Additional info (most of it probably useless, I think):
Camera: FOV = 70°, Ratio = 16/9, Near = 0.0001, Far = 10
OpenGL: Major = 4, Minor = 5, Profile = Core
Texture: InternalFormat = GL_RGBA, Filters = Anisotropic, Trilinear
Hardware: GPU = NVIDIA GeForce GTX 970, CPU = Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz, Memory = 16.00 GB RAM (15.94 GB usable), Screen = 1920 x 1080 @ 144Hz
Driver: GeForce Game Ready Driver V368.69 (release: 6 July 2016)
This is most likely due to floating-point imprecision introduced during rasterization (interpolation, perspective correction) and worsened by the normalization in the fragment shader used to fetch the correct texels.
But it is also a problem with mipmaps: to calculate which level to use, the UVs of adjacent pixels are compared to determine whether the texture is stretched or compressed on screen. Because of the imprecision, adjacent pixels can end up sharing the same UV, so the differences between them (the partial derivatives) are null. This makes the texture() function sample the lowest mipmap level on those identical pixels, creating the discontinuities.
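As a rough back-of-the-envelope check (my numbers, not measurements from the scene): a 32-bit float has a 24-bit significand, so near a UV value of 10000 (~2^13.3) the spacing between adjacent representable floats is 2^(13-23) ~ 0.001. One full texture repeat spans 1.0 of UV, so at that magnitude it is resolved in only about a thousand steps; up close, neighboring pixels need UV increments far smaller than that, so the interpolated coordinates snap to the same representable value and their screen-space differences (and thus the derivatives) vanish.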
I have a project I'm working on in libGDX. I'm running tests on a distance field font, and I've run into issues while compiling the shader on my Android phone (Galaxy Core 2, Android 4.4.2): when deployed on my phone I get errors, while the desktop app works fine (mostly; I'll get to that).
I'll take you through what I've been trying.
I want to be able to enable and disable a font border at run time, and I can do this fine in the desktop app using the following shader and methods.
.frag:
#ifdef GL_ES
precision mediump float;
#else
#define LOWP
#endif
uniform sampler2D u_texture;
uniform float u_lower;
uniform float u_upper;
varying vec4 v_color;
uniform vec4 u_outlineColor;
uniform float u_enableOutline;
varying vec2 v_texCoord;
const float smoothing = 1.0/12.0;
const float outlineWidth = 3.0/12.0; //will need to be tweaked
const float outerEdgeCenter = 0.5 - outlineWidth; //for optimizing below calculation
void main() {
    float distance = texture2D(u_texture, v_texCoord).a;
    if (u_enableOutline > 0){
        float alpha = smoothstep(outerEdgeCenter - smoothing, outerEdgeCenter + smoothing, distance); // bigger to accommodate outline
        float border = smoothstep(0.45 - smoothing, 0.55 + smoothing, distance);
        gl_FragColor = vec4(mix(u_outlineColor.rgb, v_color.rgb, border), alpha);
    }
    else {
        float alpha = smoothstep(0.5 - smoothing, 0.5 + smoothing, distance);
        gl_FragColor = vec4(v_color.rgb, alpha);
    }
}
.vert:
uniform mat4 u_projTrans;
attribute vec4 a_position;
attribute vec2 a_texCoord0;
attribute vec4 a_color;
varying vec4 v_color;
varying vec2 v_texCoord;
void main() {
    gl_Position = u_projTrans * a_position;
    v_texCoord = a_texCoord0;
    v_color = a_color;
}
With the distance font method to enable / disable the outline being:
public void enableOutline(float enable) {
    ENABLE_OUTLINE = enable;
}
Where ENABLE_OUTLINE is passed to the shader by
distanceFieldShader.setUniformf("u_enableOutline", ENABLE_OUTLINE);
In this setup, running on my phone gives the following error:
"cannot compare float to int"
referencing this line in the .frag
if (u_enableOutline > 0){
Fair enough I say, so I change the data type like so:
uniform int u_enableOutline;
And the method to pass through an int:
public void enableOutline(int enable) {
    ENABLE_OUTLINE = enable;
}
BUT there is no way to pass an int to the shader (which is why I chose floats in the first place; see this image: http://imgur.com/nVTN12i), and because of this my method to enable the outline doesn't work, due to the mismatched data types.
So my question is: can I get around this somehow, so that I can enable and disable a border on my phone given these constraints?
It sounds like the API you are using does not offer the possibility of using bool or int uniforms. The workaround of using floats to circumvent this seems like a good idea.
In your example, the problem occurs because, unlike C and Java, the GLSL compiler does not implicitly convert ints to floats.
What you need to do is tell the compiler the type of your "0": by using the syntax 0.0, the compiler knows that your constant is a float and not an integer.
if (u_enableOutline > 0.0){
should fix your problem
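As an aside (my own suggestion, not part of the fix above): if you keep the uniform as a float so that setUniformf() keeps working, comparing against 0.5 instead of 0.0 also tolerates any imprecision in the uploaded 0.0/1.0 flag:
uniform float u_enableOutline; // set to 0.0 or 1.0 from Java via setUniformf()
...
if (u_enableOutline > 0.5) { // float-to-float midpoint test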
edit -- I've updated my code after TenFour04's answer, but it still just shows black.
I've updated my libGDX version, and it has required me to use GL20, which has led me to make a few changes. Most of it works fine, except when trying to texture the mesh: currently the surface mesh shows as black, and the ground mesh doesn't show at all; with some changes I can get it to show both the surface and ground meshes as black. I've played around with the binding, the order of surTexture.bind and grdTexture.bind, using numbers less than 16, and the render order, and I've managed to get it to use the surface texture for everything except the surface and the ground.
Can anyone see where I might be going wrong with this?
// creating a mesh with maxVertices set to vertices.size*3
groundMesh = new Mesh(true, vertices.size*3, vertices.size*3,
        new VertexAttribute(Usage.Position, 2, ShaderProgram.POSITION_ATTRIBUTE),
        new VertexAttribute(Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE));
groundMesh.setVertices(temp);

short[] indices = new short[vertices.size*2];
for(int i = 0; i < vertices.size*2; i++){
    indices[i] = (short)i;
}
groundMesh.setIndices(indices);

surfaceMesh = new Mesh(true, vertices.size*3, vertices.size*3,
        new VertexAttribute(Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE),
        new VertexAttribute(Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE));
...
grdTexture = new Texture(Gdx.files.internal("data/img/leveltest/ground.png"));
// Gdx.graphics.getGL20().glActiveTexture(GL20.GL_TEXTURE16);
// the docs say that setWrap and setFilter bind the texture, so I thought
// I might have to set the active texture here, but it does nothing
grdTexture.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
grdTexture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
surTexture = new Texture(Gdx.files.internal("data/img/leveltest/surface.png"));
// Gdx.graphics.getGL20().glActiveTexture(GL20.GL_TEXTURE17);
surTexture.setWrap(TextureWrap.Repeat, TextureWrap.ClampToEdge);
//TODO change these filters for better quality
surTexture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
drawWorld gets called inside render()
public void drawWorld(SpriteBatch batch, OrthographicCamera camera) {
    batch.begin();
    batch.setProjectionMatrix(camera.combined);
    layers.drawLayers(batch);
    if ((spatials != null) && (spatials.size > 0)) {
        for (int i = 0; i < spatials.size; i++) {
            spatials.get(i).render(batch);
        }
    }
    batch.end();
    drawGround(camera);
}

private void drawGround(OrthographicCamera camera) {
    shader.begin();
    shader.setUniformMatrix("u_projTrans", camera.combined);

    grdTexture.bind(0);
    shader.setUniformi("u_texture", 0);
    // changed GL_TRIANGLES to GL_TRIANGLE_STRIP to render meshes correctly after changing to GL20
    groundMesh.render(shader, GL20.GL_TRIANGLE_STRIP);

    surTexture.bind(0);
    shader.setUniformi("u_texture", 0);
    surfaceMesh.render(shader, GL20.GL_TRIANGLE_STRIP);

    shader.end();
}
fragment.glsl
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif
varying LOWP vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
void main()
{
    gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
}
vertex.glsl
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
    v_color = a_color;
    v_color.a = v_color.a * (256.0/255.0);
    v_texCoords = a_texCoord;
    gl_Position = u_projTrans * a_position;
}
In your vertex shader, you are using a_texCoord, but in your mesh constructor, you have effectively named your attributes a_texCoord16 and a_texCoord17 by using ShaderProgram.TEXCOORD_ATTRIBUTE+"16" and ShaderProgram.TEXCOORD_ATTRIBUTE+"17".
Since you are not multi-texturing, I would just replace those with "a_texCoord".
It looks like maybe you are conflating attribute name suffixes with texture units, although the two concepts are not necessarily related. The reason you might want to add number suffixes to your texCoords is if your mesh has multiple UVs for each vertex because it is multi-textured. But really you can use any naming scheme you like. The reason you might want to bind to a unit other than 0 is if you're multi-texturing on a single mesh, so you need multiple textures bound at once. So if you actually were multi-texturing, using attribute suffixes that match texture unit numbers might help avoid confusion when you are trying to match UVs to textures in the fragment shader.
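For example, a sketch of that suggestion applied to the ground mesh from the question (same fields, only the texture-coordinate attribute name changes):
groundMesh = new Mesh(true, vertices.size*3, vertices.size*3,
        new VertexAttribute(Usage.Position, 2, ShaderProgram.POSITION_ATTRIBUTE),
        // plain "a_texCoord", matching the attribute the vertex shader declares
        new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord"));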
OK, so it turns out the problem was the vertex shader: the code from here doesn't work, presumably because the meshes above define no color attribute, so a_color never receives data and everything gets multiplied by black.
Here is the working shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
    v_color = vec4(1, 1, 1, 1);
    v_texCoords = a_texCoord;
    gl_Position = u_projTrans * a_position;
}
I have recently begun building a deferred rendering pipeline for the engine I am working on, but I'm stuck at reconstructing the world position from depth. I have looked at quite a few examples which explain that you need either a world-position texture or a depth texture for the correct distance and direction calculation of the light.
My problem is that the so-called position texture, which presumably holds the world position, doesn't seem to give me correct data. I have therefore tried to find alternative ways of getting a world position, and some have suggested that I should use a depth texture instead, but then what?
To make it all clearer, this picture shows the textures that I currently have stored:
Position (top left), Normal (top right), Diffuse (bottom left) and Depth (bottom right).
For the light pass I am trying to use a method which works fine when used in the first pass. When I try the same method in the light pass, with the exact same variables, it stops working.
Here's my Geometry Vertex Shader:
#version 150
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec4 in_Position;
in vec3 in_Normal;
in vec2 in_TextureCoord;
out vec3 pass_Normals;
out vec4 pass_Position;
out vec2 pass_TextureCoord;
out vec4 pass_Diffuse;
void main(void) {
    pass_Position = viewMatrix * modelMatrix * in_Position;
    pass_Normals = (viewMatrix * modelMatrix * vec4(in_Normal, 0.0)).xyz;
    pass_Diffuse = vec4(1, 1, 1, 1);
    gl_Position = projectionMatrix * pass_Position;
}
Geometry Fragment shader:
#version 150 core
uniform sampler2D texture_diffuse;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec4 pass_Position;
in vec3 pass_Normals;
in vec2 pass_TextureCoord;
in vec4 pass_Diffuse;
out vec4 out_Diffuse;
out vec4 out_Position;
out vec4 out_Normals;
void main(void) {
    out_Position = pass_Position;
    out_Normals = vec4(pass_Normals, 1.0);
    out_Diffuse = pass_Diffuse;
}
Light Vertex Shader:
#version 150
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 pass_TextureCoord;
void main(void)
{
    gl_Position = in_Position;
    pass_TextureCoord = in_TextureCoord;
}
Light Fragment Shader:
#version 150 core
uniform sampler2D texture_Diffuse;
uniform sampler2D texture_Normals;
uniform sampler2D texture_Position;
uniform vec3 cameraPosition;
uniform mat4 viewMatrix;
in vec2 pass_TextureCoord;
out vec4 frag_Color;
void main(void)
{
    frag_Color = vec4(1, 1, 1, 1);
    vec4 image = texture(texture_Diffuse, pass_TextureCoord);
    vec3 position = texture(texture_Position, pass_TextureCoord).rgb;
    vec3 normal = texture(texture_Normals, pass_TextureCoord).rgb;
    frag_Color = image;

    vec3 LightPosition_worldspace = vec3(0, 2, 0);
    vec3 vertexPosition_cameraspace = position;
    vec3 EyeDirection_cameraspace = vec3(0, 0, 0) - vertexPosition_cameraspace;
    vec3 LightPosition_cameraspace = (viewMatrix * vec4(LightPosition_worldspace, 1)).xyz;
    vec3 LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;

    vec3 n = normal;
    vec3 l = normalize(LightDirection_cameraspace);
    float cosTheta = max(dot(n, l), 0.0);
    // renamed from "distance" so the variable doesn't shadow the built-in distance()
    float lightDistance = distance(LightPosition_cameraspace, vertexPosition_cameraspace);
    frag_Color = vec4((vec3(10, 10, 10) * cosTheta) / (lightDistance * lightDistance), 1.0);
}
And finally, here's the current result:
So my question is whether anyone can explain this result, or tell me how to obtain a correct one. I would also appreciate good resources on the subject.
Yes, using the depth buffer to reconstruct position is your best bet. This will significantly cut down on memory bandwidth / storage requirements. Modern hardware is biased towards doing shader calculations rather than memory fetches (this was not always the case), and the instructions necessary to reconstruct position per-fragment will always finish quicker than if you were to fetch the position from a texture with adequate precision. Now, you just have to realize what the hardware depth buffer stores (understand how depth range and perspective distribution work) and you will be good to go.
I do not see any attempt at reconstructing the world/view-space position from the depth buffer in the code your question lists; you are just sampling from a buffer that stores the position in view space. Since you are not performing reconstruction in this example, the problem has to do with sampling the view-space position. Can you update your question to include the internal formats of the G-Buffer textures? In particular, are you using a format that can represent negative values? (This is necessary to express position; otherwise negative values are clamped to 0.)
On a final note, your position is also in view space, not world space; a trained eye can tell this immediately from the way the colors in your position buffer are black in the lower-left corner. If you want to debug your position/normal, you should bias/scale the sampled colors into the visible range:
([-1.0, 1.0] -> [0.0, 1.0]) // Vec = Vec * 0.5 + 0.5
You may need to do this when you output some of the buffers if you want to store the normal G-Buffer more efficiently (e.g. in an 8-bit fixed-point texture instead of floating-point).
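For completeness, here is a minimal sketch of the depth-based reconstruction mentioned at the top (my code, assuming a standard perspective projection, the default [0, 1] depth range, a hypothetical depth texture texture_Depth, and a hypothetical invProjectionMatrix uniform; neither is in the original shaders):
uniform sampler2D texture_Depth;  // hardware depth buffer (hypothetical name)
uniform mat4 invProjectionMatrix; // inverse of the projection matrix (hypothetical)

vec3 ReconstructViewPosition(vec2 texCoord)
{
    float depth = texture(texture_Depth, texCoord).r;         // stored in [0, 1]
    vec4 ndc = vec4(vec3(texCoord, depth) * 2.0 - 1.0, 1.0);  // back to [-1, 1] NDC
    vec4 viewPos = invProjectionMatrix * ndc;                 // un-project
    return viewPos.xyz / viewPos.w;                           // perspective divide
}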