I need to compute normals for each triangle face (not for each vertex) using OpenGL ES 2.0 on Android, but I can't pass an attribute to the fragment shader directly.
I found one solution: repeat the vertices for each triangle, and pass the triangle's face normal as an attribute in the vertex shader.
But I don't want to duplicate the vertices; I am drawing triangles using vertex indices.
So, when a vertex is shared by more than one triangle, how should I compute the triangle face normals?
P.S. I am a newbie in OpenGL.
The easiest solution is just to duplicate the vertices; the vertex shader is very rarely a bottleneck. With duplicated vertices, each triangle owns its three corners, so the face normal can be passed as an ordinary vertex attribute; a minimal sketch of that expansion follows below. Still, I do not know your specific needs, and there are circumstances when duplicating vertices is not a good solution. For example, if the mesh is skinned and animated, a lot of computation happens in the vertex shader. Another case is when the mesh is deformed in the vertex shader, so the normals have to be recomputed anyway. Clearly, you cannot compute per-face normals in the vertex shader. You could do that in a geometry shader, but OpenGL ES 2.0 does not have one.
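If duplication does work for you, here is a rough sketch in plain Java of expanding an indexed mesh into per-face vertices carrying face normals; the x, y, z packing of positions and the triangle layout of the index array are assumptions, so adapt it to your vertex format:
// Expands an indexed triangle mesh into duplicated vertices, each carrying
// the normal of its face. Positions are packed x,y,z; indices form triangles.
static float[][] expandWithFaceNormals(float[] positions, short[] indices) {
    float[] outPositions = new float[indices.length * 3];
    float[] outNormals = new float[indices.length * 3];
    for (int t = 0; t < indices.length; t += 3) {
        // fetch the three corners of this triangle
        float[][] p = new float[3][3];
        for (int v = 0; v < 3; v++) {
            int base = indices[t + v] * 3;
            p[v][0] = positions[base];
            p[v][1] = positions[base + 1];
            p[v][2] = positions[base + 2];
        }
        // face normal = normalize(cross(p1 - p0, p2 - p0))
        float ux = p[1][0] - p[0][0], uy = p[1][1] - p[0][1], uz = p[1][2] - p[0][2];
        float vx = p[2][0] - p[0][0], vy = p[2][1] - p[0][1], vz = p[2][2] - p[0][2];
        float nx = uy * vz - uz * vy, ny = uz * vx - ux * vz, nz = ux * vy - uy * vx;
        float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
        if (len > 0.0f) { nx /= len; ny /= len; nz /= len; }
        // write the three duplicated corners, all sharing the face normal
        for (int v = 0; v < 3; v++) {
            int o = (t + v) * 3;
            outPositions[o] = p[v][0]; outPositions[o + 1] = p[v][1]; outPositions[o + 2] = p[v][2];
            outNormals[o] = nx; outNormals[o + 1] = ny; outNormals[o + 2] = nz;
        }
    }
    return new float[][] { outPositions, outNormals };
}
The expanded arrays are then drawn with glDrawArrays instead of glDrawElements, with the normals bound as a second vertex attribute.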
However, there is a simple alternative: compute the normals in the fragment shader! So, if duplicating vertices does not work for you, here is that solution. We will need an OpenGL extension, GL_OES_standard_derivatives, which is widely supported, but you should still check that it is supported on the device before running the code (see the runtime check after the next snippet). To enable the extension, add the following line to the fragment shader before its code:
#extension GL_OES_standard_derivatives : enable
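On Android, you can check for the extension at runtime before taking this path; a minimal sketch using android.opengl.GLES20 (assumes a current GL context, e.g. inside onSurfaceCreated):
// Derivative functions (dFdx/dFdy) are only available with this extension.
String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
boolean hasDerivatives = extensions != null
        && extensions.contains("GL_OES_standard_derivatives");
// If hasDerivatives is false, fall back to duplicated vertices with per-face normals.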
We will need a varying variable holding the vertex position as seen from the camera. It should be computed in the vertex shader, and how that is done depends on your shader; it is used for many effects, so you may already compute it. One naming caveat: in the code below the varying is called positionWorld, but the value written to it is the model-view-transformed position, i.e. the view-space position, and that is what the math below relies on. So let's assume we have this line in the fragment shader:
varying vec3 positionWorld;
We will need a view matrix of the camera. It is possible that you are already passing one to the fragment shader. Let's assume that we have this uniform in the fragment shader:
uniform mat4 viewMatrix;
Now we are going to compute the normal: first a normal in view space, which is then transformed into world space. To compute the view-space normal, we use the derivative functions:
vec3 normalViewSpace = normalize(cross(dFdx(positionWorld), dFdy(positionWorld)));
Here the derivatives of the position are taken with respect to the x and y coordinates in screen space. That gives two vectors that lie in the surface plane; to get the normal to the surface, we take their cross product. The result is not a unit vector, so we also need to normalize it.
The last step is to transform the normal into world space. The view matrix transforms from world space to view space, so one could think we need its inverse to go back from view space to world space. But the rotation part of the view matrix is orthonormal (and a direction vector with w = 0 ignores the translation), so its transpose acts as the inverse here, and multiplying with the vector on the left applies exactly that transpose:
vec3 normalWorldSpace = (vec4(normalViewSpace, 0.0) * viewMatrix).xyz;
To make life easier, we can wrap everything into a function:
vec3 ReconstructNormal(vec3 positionWorldSpace)
{
vec3 normalViewSpace = normalize(cross(dFdx(positionWorldSpace), dFdy(positionWorldSpace)));
vec3 normalWorldSpace = (vec4(normalViewSpace, 0.0) * viewMatrix).xyz;
return normalWorldSpace;
}
Now we have a reconstructed normal in world space. Below is a simple example of why this can be very useful. Note that since it uses WebGL, it is pretty much OpenGL ES 2.0 compatible.
var container;
var camera, scene, renderer;
var mesh;
var uniforms;
var clock = new THREE.Clock();
init();
animate();
function init() {
container = document.getElementById('container');
camera = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 0.6;
camera.position.y = 0.2;
camera.rotation.x = -0.45;
scene = new THREE.Scene();
var planeGeometry = new THREE.PlaneGeometry(0.75, 0.75, 32, 32);
var heightMap = THREE.ImageUtils.loadTexture("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAAFiUAABYlAUlSJPAAAAeeSURBVFhHTdfFq1VfFAfwfbzP7q5ndwcm2IqKgQoiOhQFQXAgDpxecObAP0ARdGJMdGDhwAILO7C7uzvv73wWHPltuLx7z9l7rfWNtc552ZQpUypt27ZN7dq1S/9f7969S1evXk116tRJtWrVSj9//kzfv39PX79+jfvdu3dPpVIp3b9/P3Xo0CH1798/tWrVKjVs2DBt3749vX//PnXr1i3VqFEj9jVq1Ci9efMm3blzJ7Vp0yY1adIk/fr1K2Xz5s2r2CjRy5cv42/r1q3Tnz9/0smTJ6MQh+vVq5du3LgRwSSsrq5OnTp1Sq9fv05fvnyJ38OHD0+7du1K+/bti8Jq166dfvz4EQU7v2TJkoh35cqVADN37txU1bJly0hm+VskaNasWXr+/HkEUe2AAQNSnz590ufPn1OLFi3Sx48f0+3btyO5fUePHo2CJ0yYkJo2bRpgqqqqIq4lzsaNG/8xaCkqGJDYjbdv36aaNWtGkKlTpwadEBTyPHz4MB0/fjw9efIknTp1Kn5bM2fODMqdlUjiFy9epAYNGsR9zCnYck0u+6KA5cuXVz59+hQXX716FbTWrVs3jR8/PnTt169f2rJlS9q2bVsEcNCi56NHj2IvyTAE+ePHj8Mv1u/fv0PSW7duhYwF01h2RsHZ4sWLw4SoVZ1No0ePjorpK8izZ8/+oejRo0dq37590P306dPYL6nCly5dGqyRD5uNGzf+J5Gk58+fj+Lu3bsXHsBsKa+6rHLJaevQgQMHIgGKbt68GYV8+/YtdBVM5c2bN093794NFnSE89euXYvvf//+Tbxlr2u6yHfxFaczLl26FEbNunTpUhk8eHAaMmRIOJSW6NUZUEvKB5IVa+DAgWncuHERvH79+oHGXmcZFgs+FpYktQ8b2OzVq1dI53wp17n84cOH0B/1c+bMiWIk6dixYxSjvzEjETTMh2IsYO/BgweRHAuoJomCJVq0aFHq2bNn9P/FixdDUrIxKtlKq1atKgtq8+HDhyOhBJs3b46gDEozruYBByWlJbSuW8ypKEEZbMyYMWnatGnxe/bs2SGj8zoIEKDFz9atW1dRoemltVRIQ0msQYMGxV9ONrlobpLpDguNCu7du3cwggVzBOUkVfzIkSMDGIDXr1+PYu0HvJQjL2sRSFUkuRvah0sVhTKGLFAzUWEwaEzBYujwjFiuQ01rPoAYesAUpJMUm61evboisCQuQLhnz55/wwJtClQxYxWTU3KLsxXgnsS6BpBiovouJqOPGjUq/GI/QwIYg0ilKmLCNWvWRKUYUIDrqtdeJqT29JAih6SYcV4y5yR2DgAsYKpz586xjyx8Ip4zfme5xhUobUIvhJI5LKilUnMAU5s2bUpHjhyJBJZkffv2TQsXLgxvSEhv0ulzC1rJ0C85mbW2XKU8cZluWocEivAdza6jmpkEKbrCb0uRCrAUp6hZs2bF044xtZ72dj3LstjP5L67RopS/oQr63HjGE2VSiUOqVYxKNVqdPRhKMbq2rVrJMUMUyoacmZ0HxMSug89YyteLuPcHvKGCaGlH00dKGZ10QkGEjQWlhiPhgzlGWLSKdp+Le2ae5gS234GFBsDpijmgCvllZXR5YJhxFyQCwYxrQREm6ECAZbco+HYsWNjJqDTnt27d8fEwwBAihfPuwPGMIExw4qpS/mXsocLFFpNVaq0UQIHtRYkPsxqL4Pmz5H4LjnjOnfs2LF/b0H+8g0wigGAAXnkwoULYdRSfrhMe4FpjWp6QsOx0Jpwng/QFAUKcOLEifjus3///kCGZtT7bi/UZCMxf/Ca/TqPjNnEiRMrKGIgzvRxoxjFWgxaBaLNfc91j9PipbUwnTZkLOMWYtrzku/YIqX9ZoHCmDJbuXJlxWFoVeqANXny5PiLdgEFGDFiRDp06FCg8c5g2Y9qUhTUkwPdEENrDRs2LJ4FJOEznUHebO3atRUDA21QM6MAPgynALoxFy/Q2D7f7ZWUhBZ2UKx4OpMBelLw06RJkyIPBrFw+vTpVMo1LqPDDQm9C+YvqoFSO27dujXt3bs3ng9aS1D7+EJxinAeEwrTFZ4dUGJAUQrUzszOI86KxV/ZggULKtB6WHC+YKqmt5dRSM+cORMIeUVy+/Uyr/CBYJiQLP9HJ15gCo8oBAOSkxEoIMQyM7INGzbEW7HFB2YCGplFN+hpKMkgqUDYcoa+06dPD13piQGv85KuX78+nhnFdFSgj9gex2YAk2b5W0s8jpkINTZLzNG+Q2mj32iGwBBCP5mgNVpNOoigO3fuXPQ4FgojAiGOLlHk0KFD4z0y/jFxiCksdKFZzxfz3JLQHkm9sDKVB5QgHL1ixYo4y9DenhQAGFnd12FAzJ8/P+IfPHgwnT17NpXyqstQ0Jx5zHSoTTeUFg8nlF++fDkQGiJ0J5l79kLpRVZg5739OiOGxM6ZJ87633HHjh0x9Er5y2PZTYZBPRSqdojZvFCqmAcgkojmPgaL/xfpumzZsvjN8SYqnwCEQei1pt9A7Ny5M4CZrlV0Qa8ukNwmfz1gFKFKukPqugQSksB0dI3RdMuMGTNCazJoNxK6rztIx/liGWgYr66uTv8BU33Si9zKcpYAAAAASUVORK5CYII=");
heightMap.wrapT = heightMap.wrapS = THREE.RepeatWrapping;
uniforms = {u_time: {type: "f", value: 0.0 }, u_heightMap: {type: "t",value:heightMap} };
var material = new THREE.ShaderMaterial({
uniforms: uniforms,
side: THREE.DoubleSide,
wireframe: false,
vertexShader: document.getElementById('vertexShader').textContent,
fragmentShader: document.getElementById('fragment_shader').textContent
});
mesh = new THREE.Mesh(planeGeometry, material);
mesh.rotation.x = Math.PI / 2.0;
scene.add(mesh);
renderer = new THREE.WebGLRenderer();
renderer.setClearColor( 0x000000, 1 );
container.appendChild(renderer.domElement);
onWindowResize();
window.addEventListener('resize', onWindowResize, false);
}
function onWindowResize(event) {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
}
function animate() {
requestAnimationFrame(animate);
render();
}
function render() {
var delta = clock.getDelta();
uniforms.u_time.value += delta;
//mesh.rotation.z += delta * 0.5;
renderer.render(scene, camera);
}
body { margin: 0px; overflow: hidden; }
<script src="https://threejs.org/build/three.min.js"></script>
<div id="container"></div>
<script id="fragment_shader" type="x-shader/x-fragment">
#extension GL_OES_standard_derivatives : enable
varying vec3 positionWorld; // view-space position of the vertex (see the naming note above)
vec3 ReconstructNormal(vec3 positionWorldSpace)
{
vec3 normalViewSpace = normalize(cross(dFdx(positionWorldSpace), dFdy(positionWorldSpace)));
vec3 normalWorldSpace = (vec4(normalViewSpace, 0.0) * viewMatrix).xyz;
return normalWorldSpace;
}
// Just an example of using the normal: some really simple shading
void main( void )
{
vec3 lightDir = normalize(vec3(1.0, 1.0, 1.0));
vec3 normal = ReconstructNormal(positionWorld);
float diffuse = max(dot(lightDir, normal), 0.0);
vec3 albedo = vec3(0.2, 0.4, 0.7);
gl_FragColor = vec4(albedo * diffuse, 1.0);
}
</script>
<script id="vertexShader" type="x-shader/x-vertex">
uniform lowp sampler2D u_heightMap;
uniform float u_time;
varying vec3 positionWorld;
// Example of vertex shader that moves vertices
void main()
{
vec3 pos = position;
vec2 offset1 = vec2(1.0, 0.5) * u_time * 0.01;
vec2 offset2 = vec2(0.5, 1.0) * u_time * 0.01;
float height1 = texture2D(u_heightMap, uv + offset1).r * 0.02;
float height2 = texture2D(u_heightMap, uv + offset2).r * 0.02;
pos.z += height1 + height2;
vec4 mvPosition = modelViewMatrix * vec4( pos, 1.0 );
positionWorld = mvPosition.xyz; // despite the name, this is the view-space position
gl_Position = projectionMatrix * mvPosition;
}
</script>
Related
The 3D application has a static camera:
float eyeX = 0.0f;
float eyeY = 0.0f;
float eyeZ = 0.0f;
Matrix.setLookAtM(viewMatrix, 0, eyeX, eyeY, eyeZ,
0f, 0f, -4f, 0f, 1.0f, 0.0f)
Is this vector then used for the eye coordinates in the shader?
const vec4 eyePos = vec4(0.0, 0.0, 0.0, 0.0);
Or are additional transformations needed?
Note: I read the post What exactly are eye space coordinates?, but I still have doubts, because my fog shader is not working. In this shader, the distance from the observer to the object is calculated:
uniform mat4 u_vMatrix;
in vec4 a_position;
out float v_eyeDist;
const vec4 eyePos = vec4(0.0, 0.0, 0.0, 0.0);
...
void main() {
...
vec4 viewPos = u_vMatrix * a_position;
v_eyeDist = sqrt(pow((viewPos.x - eyePos.x), 2.0) +
pow((viewPos.y - eyePos.y), 2.0) +
pow((viewPos.z - eyePos.z), 2.0));
...
}
Thanks in advance!
Solution: On the advice of Rabbid76, I used the length() function, as well as the model-view matrix.
uniform mat4 u_mvMatrix; // model-view matrix
in vec4 a_position;
out float v_eyeDist;
...
void main() {
...
vec4 viewPos = u_mvMatrix * a_position;
v_eyeDist = length(viewPos.xyz);
...
}
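For completeness, here is how the matching model-view matrix could be built and uploaded on the Java side with android.opengl.Matrix; the program, modelMatrix and viewMatrix names are assumptions from your setup:
// Combine model and view into a single model-view matrix.
float[] mvMatrix = new float[16];
Matrix.multiplyMM(mvMatrix, 0, viewMatrix, 0, modelMatrix, 0);
// Upload it to the u_mvMatrix uniform used by the shader above.
int mvHandle = GLES20.glGetUniformLocation(program, "u_mvMatrix");
GLES20.glUniformMatrix4fv(mvHandle, 1, false, mvMatrix, 0);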
The view matrix transforms from world space to view space. View space is the local coordinate system defined by the point of view onto the scene: the position of the view, the line of sight, and the upward direction of the view define a coordinate system relative to the world coordinate system.
The origin of view space is the "eye" position, so in view space the "eye" is at (0, 0, 0).
In GLSL, the distance between two points can be computed with the built-in function distance. It is sufficient to compute the Euclidean distance of the x, y, z components (Cartesian coordinates), since the w component (homogeneous coordinate) is 1 for both vectors, e.g.:
v_eyeDist = distance(viewPos.xyz, eyePos.xyz);
Since the point of view (the position of the camera) is (0, 0, 0) in view space, it is sufficient to compute the length of the view vector: the distance of a point from the origin of the coordinate system is the length of the vector to that point. In GLSL this can be computed with the built-in function length. Here it is important to take the length of the x, y, z components only and to exclude the w component; including w would lead to wrong results:
v_eyeDist = length(viewPos.xyz);
I've been wondering how to efficiently implement the following image scaling procedure in Java or Processing: when scaling out, the bounds of the image wrap around the screen edges. I'd like to apply the same at runtime to my Pixels() array in Processing. (To keep this Processing-agnostic: Pixels() is nothing more than a method that returns all pixels on my current screen in an array.)
(Note that this example was made in Max/MSP/Jitter using the jit.rota module, which appears to use a very efficient implementation.)
unscaled
zoomed out
Can anyone help me out on how to get started? I assume it must be a combination of downscaling the image and creating adjacent copies of it, but that doesn't sound very efficient to me. The example above works perfectly on videos with even the most extreme settings.
One option I can think of that will be fast is using a basic fragment shader.
Luckily you've got an example pretty close to what you need that ships with Processing via File > Examples > Topics > Shaders > Infinite Tiles
I won't be able to provide a decent start-to-finish guide here, but there's an exhaustive PShader tutorial on the Processing website if you're starting from scratch.
A really rough gist of what you need:
Shaders are programs that run very fast, in parallel, on the GPU. They come in two flavours: vertex shaders (which deal mainly with 3D geometry) and fragment shaders (which deal mainly with "fragments", what is about to become pixels on screen). You'll want to play with a fragment shader.
The language is called GLSL and is a bit different (fewer types, stricter, simpler syntax), but not totally alien (similar C-style declarations of variables, functions, conditionals, loops, etc.).
If you want to make a variable from a GLSL program accessible in Processing, you prefix it with the keyword uniform.
use textureWrap(REPEAT) to wrap edges
to scale the image and wrap it you'll need to scale the texture sampling coordinates:
Here's what the InfiniteTiles scroller shader looks like:
//---------------------------------------------------------
// Display endless moving background using a tile texture.
// Contributed by martiSteiger
//---------------------------------------------------------
uniform float time;
uniform vec2 resolution;
uniform sampler2D tileImage;
#define TILES_COUNT_X 4.0
void main() {
vec2 pos = gl_FragCoord.xy - vec2(4.0 * time);
vec2 p = (resolution - TILES_COUNT_X * pos) / resolution.x;
vec3 col = texture2D (tileImage, p).xyz;
gl_FragColor = vec4 (col, 1.0);
}
You can simplify this a bit, since you don't need the scrolling. Additionally, instead of subtracting and multiplying (- TILES_COUNT_X * pos), you can simply multiply:
//---------------------------------------------------------
// Display endless moving background using a tile texture.
// Contributed by martiSteiger
//---------------------------------------------------------
uniform float scale;
uniform vec2 resolution;
uniform sampler2D tileImage;
void main() {
vec2 pos = gl_FragCoord.xy * vec2(scale);
vec2 p = (resolution - pos) / resolution.x;
vec3 col = texture2D (tileImage, p).xyz;
gl_FragColor = vec4 (col, 1.0);
}
Notice I've repurposed the time variable as scale, so the Processing code accessing this uniform must also change:
//-------------------------------------------------------------
// Display endless moving background using a tile texture.
// Contributed by martiSteiger
//-------------------------------------------------------------
PImage tileTexture;
PShader tileShader;
void setup() {
size(640, 480, P2D);
textureWrap(REPEAT);
tileTexture = loadImage("penrose.jpg");
loadTileShader();
}
void loadTileShader() {
tileShader = loadShader("scroller.glsl");
tileShader.set("resolution", float(width), float(height));
tileShader.set("tileImage", tileTexture);
}
void draw() {
tileShader.set("scale", map(mouseX,0,width,-3.0,3.0));
shader(tileShader);
rect(0, 0, width, height);
}
Move the mouse to change scale:
Update: You can play with a very similar shader here:
I effectively did come up with a solution, but I will implement George's method next, as the speed difference using shaders seems to be worth it!
public void scalePixels(double wRatio, double hRatio, PGraphics viewPort) {
  viewPort.loadPixels();
  // keep an untouched copy of the source pixels to sample from
  int[] srcPixels = viewPort.pixels.clone();
  double px, py;
  for (int i = 0; i < viewPort.height; i++) {
    for (int j = 0; j < viewPort.width; j++) {
      // wrap the destination coordinate into one tile, then map it back to source space
      px = Math.floor(j % (wRatio * viewPort.width) / wRatio);
      py = Math.floor(i % (hRatio * viewPort.height) / hRatio);
      viewPort.pixels[i * viewPort.width + j] = srcPixels[(int) (py * viewPort.width + px)];
    }
  }
  viewPort.updatePixels();
}
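For completeness, a minimal sketch of driving it from a Processing sketch; the image name is a placeholder and g is the sketch's main PGraphics:
PImage img;
void setup() {
  size(640, 480);
  img = loadImage("tile.jpg"); // hypothetical image in the data folder
}
void draw() {
  image(img, 0, 0, width, height);
  // map the mouse to a zoom-out ratio between 0.1 and 1.0
  double ratio = map(mouseX, 0, width, 0.1, 1.0);
  scalePixels(ratio, ratio, g); // tiles the current frame's pixels
}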
I'm using OpenGL (4.5 core, with LWJGL 3.0.0 build 90) and I noticed some artifacts on textures using GL_REPEAT wrap mode with a high amount of repetitions:
What causes this, and how can I fix it (if I can)?
Here, the plane's size is 100x100 and the UVs run from 0 to 10000 in each direction. This screenshot is taken really, really close to the plane (from farther away, the texture is so compressed that we just see a flat gray color); the near plane is at 0.0001 and the far plane at 10.
I'm not sure the problem is in the depth buffer, since the default OpenGL depth buffer has really high precision at close distances.
(EDIT: I'm thinking of a floating-point error on the texture coordinates, but I'm not sure.)
Here are my shaders. I'm using deferred rendering and the texture sampling happens in the geometry pass, so I'm only including the geometry-pass shaders.
Vertex shader:
#version 450 core
uniform mat4 projViewModel;
uniform mat4 viewModel;
uniform mat3 normalView;
in vec3 normal_model;
in vec3 position_model;
in vec2 uv;
in vec2 uv2;
out vec3 pass_position_view;
out vec3 pass_normal_view;
out vec2 pass_uv;
out vec2 pass_uv2;
void main(){
pass_position_view = (viewModel * vec4(position_model, 1.0)).xyz;
pass_normal_view = normalView * normal_model;
pass_uv = uv;
pass_uv2 = uv2;
gl_Position = projViewModel * vec4(position_model, 1.0);
}
Fragment shader:
#version 450 core
struct Material {
sampler2D diffuseTexture;
sampler2D specularTexture;
vec3 diffuseColor;
float uvScaling;
float shininess;
float specularIntensity;
bool hasDiffuseTexture;
bool hasSpecularTexture;
bool faceSideNormalCorrection;
};
uniform Material material;
in vec3 pass_position_view;
in vec3 pass_normal_view;
in vec2 pass_uv;
in vec2 pass_uv2;
layout(location = 0) out vec4 out_diffuse;
layout(location = 1) out vec4 out_position;
layout(location = 2) out vec4 out_normal;
void main(){
vec4 diffuseTextureColor = vec4(1.0);
if(material.hasDiffuseTexture){
diffuseTextureColor = texture(material.diffuseTexture, pass_uv * material.uvScaling);
}
float specularTextureIntensity = 1.0;
if(material.hasSpecularTexture){
specularTextureIntensity = texture(material.specularTexture, pass_uv * material.uvScaling).x;
}
vec3 fragNormal = pass_normal_view;
if(material.faceSideNormalCorrection && !gl_FrontFacing){
fragNormal = -fragNormal;
}
out_diffuse = vec4(diffuseTextureColor.rgb * material.diffuseColor, material.shininess);
out_position = vec4(pass_position_view, 1.0); // Must be 1.0 on the alpha -> 0.0 = sky
out_normal = vec4(fragNormal, material.specularIntensity * specularTextureIntensity);
}
Yes, I know the eye-space position is useless in the G-buffer, since it can be recomputed later from the depth buffer. I just did it this way for now; it's temporary.
Also, if anything in my shaders is deprecated or bad practice, it would be cool if you told me what to do instead. Thanks!
Additional info (most of it probably useless, I think):
Camera: FOV = 70°, Ratio = 16/9, Near = 0.0001, Far = 10
OpenGL: Major = 4, Minor = 5, Profile = Core
Texture: InternalFormat = GL_RGBA, Filters = Anisotropic, Trilinear
Hardware: GPU = NVIDIA GeForce GTX 970, CPU = Intel(R) Core(TM) i7-4790K @ 4.00GHz, Memory = 16.00 GB RAM (15.94 GB usable), Screen = 1920 x 1080 @ 144Hz
Driver: GeForce Game Ready Driver V368.69 (release: 6 July 2016)
This is most likely due to floating-point imprecision introduced during rasterization (interpolation, perspective correction) and worsened by the normalization (wrapping) of the huge UVs in the fragment shader to fetch the correct texels.
But this is also a problem with mipmaps: to calculate which level to use, the UVs of adjacent pixels are compared to determine whether the texture is stretched or compressed on screen. Because of the imprecision, adjacent pixels can end up with identical UVs, so the differences between them (the partial derivatives) are zero. This makes the texture() function sample the lowest mipmap level on those identical pixels, creating the visible discontinuities.
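The magnitude of the problem is easy to check in plain Java: the spacing between adjacent representable 32-bit floats (the ULP) grows with the value, and near a texture coordinate of 10000 it is already about 1/1024:
// Spacing of adjacent representable floats near typical UV magnitudes.
System.out.println(Math.ulp(1.0f));     // ~1.19e-7: ample sub-texel precision
System.out.println(Math.ulp(100.0f));   // ~7.63e-6
System.out.println(Math.ulp(10000.0f)); // ~9.77e-4: one texel on a 1024px texture
So with UVs running up to 10000, interpolated coordinates quantize into steps of roughly a texel, adjacent fragments can receive identical UVs, and the derivative-based mipmap selection described above breaks down.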
I have a scene containing a map generated from a heightmap. I can move this map around on the X and Y axes, and the map is moved up or down so that its geometry intersects the origin. A single light hovers over the map, always 5 units above the height of the geometry at its x and y coordinates.
When the camera is located close to the origin of the scene, all the lighting behaves fine. The farther I move away from it, however, the more triangles first jump quickly to bright white and then become black almost instantly. I have not been able to figure out what is causing this.
Here is an overview of the structure of the scene graph:
Container instances do not change the OpenGL state.
The shader node activates the shader pair shown below.
The problem is also present when using the fixed-function pipeline.
The scene is translated and rotated prior to rendering the scene graph.
I'm fairly certain the problem lies in the construction of the scene, but I'm not entirely sure.
Here are some screenshots of the effect. First, one where the light is close to the camera, with the camera located some distance away from the scene origin:
Note the light being shown as a sphere, marked by the red circle. Second, one where the light is far away from the camera:
I am also drawing the normals for reference. The glow visible in this picture is always present, no matter where the light is located relative to the camera.
Here are my shaders. I'm fairly certain they work as they're supposed to, because they behaved properly in ShaderMaker:
Vertex shader:
varying vec3 normal;
varying vec3 position;
void main( void )
{
gl_Position = ftransform();
gl_TexCoord[0] = gl_MultiTexCoord0;
normal = gl_NormalMatrix * gl_Normal;
position = ( gl_ModelViewMatrix * gl_Vertex ).xyz;
}
Fragment shader:
varying vec3 normal;
varying vec3 position;
vec4 lightSource(vec3 norm, vec3 view, gl_LightSourceParameters light)
{
vec3 lightVector = normalize(light.position.xyz - view);
vec3 reflection = normalize(lightVector - view.xyz);
float diffuseFactor = max(0.0, dot(norm, lightVector));
float specularDot = max(0.0, dot(norm, reflection));
float specularFactor = pow(specularDot, gl_FrontMaterial.shininess);
return
gl_FrontMaterial.ambient * light.ambient +
gl_FrontMaterial.diffuse * light.diffuse * diffuseFactor +
gl_FrontMaterial.specular * light.specular * specularFactor;
}
vec4 lighting()
{
// normal might be damaged by linear interpolation.
vec3 norm = normalize(normal);
return
gl_FrontMaterial.emission +
gl_FrontMaterial.ambient * gl_LightModel.ambient +
lightSource(norm, position, gl_LightSource[0]);
}
void main()
{
gl_FragColor = lighting();
}
The solution to the problem was that I was applying glFrustum() to the model-view matrix. When I set glMatrixMode to the projection matrix before calling glFrustum instead, the scene rendered correctly.
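For reference, a minimal sketch of the corrected fixed-function setup, here written against LWJGL-style Java bindings with placeholder frustum parameters:
// Projection parameters belong on the projection matrix stack...
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);
// ...then switch back before applying camera and model transforms.
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();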
I'm very new to OpenGL and LibGDX. I started with these tutorials but wanted to apply a Phong texture. I've tried to merge a number of examples but am having issues.
I've got a sphere and a spinning cube in the center of the screen. I've still got hundreds of things to work out, but for the moment I don't understand why LibGDX reports that my uniform matrix can't be found:
Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: java.lang.IllegalArgumentException: no uniform with name 'uMvpMatrix' in shader
Vertex Shader
I don't believe the fragment shader is relevant, but it's at the bottom just in case.
#version 120
uniform mat4 uMvpMatrix;
varying vec3 diffuseColor;
// the diffuse Phong lighting computed in the vertex shader
varying vec3 specularColor;
// the specular Phong lighting computed in the vertex shader
varying vec4 texCoords; // the texture coordinates
void main()
{
vec3 normalDirection =
normalize(gl_NormalMatrix * gl_Normal);
vec3 viewDirection =
-normalize(vec3(gl_ModelViewMatrix * gl_Vertex));
vec3 lightDirection;
float attenuation;
if (0.0 == gl_LightSource[0].position.w)
// directional light?
{
attenuation = 1.0; // no attenuation
lightDirection =
normalize(vec3(gl_LightSource[0].position));
}
else // point light or spotlight (or other kind of light)
{
vec3 vertexToLightSource =
vec3(gl_LightSource[0].position
- gl_ModelViewMatrix * gl_Vertex);
float distance = length(vertexToLightSource);
attenuation = 1.0 / distance; // linear attenuation
lightDirection = normalize(vertexToLightSource);
if (gl_LightSource[0].spotCutoff <= 90.0) // spotlight?
{
float clampedCosine = max(0.0, dot(-lightDirection,
gl_LightSource[0].spotDirection));
if (clampedCosine < gl_LightSource[0].spotCosCutoff)
// outside of spotlight cone?
{
attenuation = 0.0;
}
else
{
attenuation = attenuation * pow(clampedCosine,
gl_LightSource[0].spotExponent);
}
}
}
vec3 ambientLighting = vec3(gl_LightModel.ambient);
// without material color!
vec3 diffuseReflection = attenuation
* vec3(gl_LightSource[0].diffuse)
* max(0.0, dot(normalDirection, lightDirection));
// without material color!
vec3 specularReflection;
if (dot(normalDirection, lightDirection) < 0.0)
// light source on the wrong side?
{
specularReflection = vec3(0.0, 0.0, 0.0);
// no specular reflection
}
else // light source on the right side
{
specularReflection = attenuation
* vec3(gl_LightSource[0].specular)
* vec3(gl_FrontMaterial.specular)
* pow(max(0.0, dot(reflect(-lightDirection,
normalDirection), viewDirection)),
gl_FrontMaterial.shininess);
}
diffuseColor = ambientLighting + diffuseReflection;
specularColor = specularReflection;
texCoords = gl_MultiTexCoord0;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Setup
shader = new ShaderProgram(vertexShader, fragmentShader);
mesh = Shapes.genCube();
mesh.getVertexAttribute(Usage.Position).alias = "a_position";
Render
...
float aspect = Gdx.graphics.getWidth() / (float) Gdx.graphics.getHeight();
projection.setToProjection(1.0f, 20.0f, 60.0f, aspect);
view.idt().trn(0, 0, -2.0f);
model.setToRotation(axis, angle);
combined.set(projection).mul(view).mul(model);
Gdx.gl20.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
shader.begin();
shader.setUniformMatrix("uMvpMatrix", combined);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
Stack Trace
at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:113)
Caused by: java.lang.IllegalArgumentException: no uniform with name 'uMvpMatrix' in shader
at com.badlogic.gdx.graphics.glutils.ShaderProgram.fetchUniformLocation(ShaderProgram.java:283)
at com.badlogic.gdx.graphics.glutils.ShaderProgram.setUniformMatrix(ShaderProgram.java:539)
at com.badlogic.gdx.graphics.glutils.ShaderProgram.setUniformMatrix(ShaderProgram.java:527)
at com.overshare.document.Views.Test.onRender(Test.java:150)
...
Fragment Shader
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_Color;
void main()
{
gl_FragColor = v_Color;
}
Can someone please tell me what I'm missing?
I ran into something similar. I think that because your shader doesn't use the uMvpMatrix uniform, its declaration gets optimized out, so it's "not there" when you go to set it. If you change your shader to actually reference the matrix in some way, you should get farther.
See (indirectly)
Do (Unused) GLSL uniforms/in/out Contribute to Register Pressure?
I believe there are ways of developing and compiling shaders offline, so for a complex shader it may make sense to develop it outside of LibGDX (hopefully you'd get better error messages). LibGDX just passes the giant string on to the lower layers; it doesn't do much with the shader itself, so there shouldn't be compatibility issues.
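As a stopgap, you can also guard the call on the Java side so an optimized-out uniform doesn't throw; hasUniform is part of LibGDX's ShaderProgram:
// Only set the uniform if the GLSL compiler kept it.
if (shader.hasUniform("uMvpMatrix")) {
    shader.setUniformMatrix("uMvpMatrix", combined);
}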
Also, the problem could be in the precision specifier. On a device (Nexus S), the following uniform definition will throw the same error:
uniform float yShift;
Using a precision specifier solves the problem:
uniform lowp float yShift;
LibGDX lets you check shader compilation and get the error log:
if(!shader.isCompiled()){
String log = shader.getLog();
}
Finally, there's a flag to ignore shader errors:
ShaderProgram.pedantic = false;
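A minimal sketch of how that flag interacts with the failing call above (it is a static field, consulted whenever uniform locations are looked up):
ShaderProgram.pedantic = false;
shader.setUniformMatrix("uMvpMatrix", combined); // now silently ignored if the uniform was optimized out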