Can't write to GL_RGBA32UI FBO texture on OpenGL ES - java

I have two GL_RGBA32UI FBO textures, which I use to store the current state of particle positions/velocities per texel.
I fill the first one with data only once, like this:
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL30.GL_RGBA32UI, width, height, 0, GL30.GL_RGBA_INTEGER, GL20.GL_UNSIGNED_INT, buffer);
Per render loop, the second one is written to via a shader while the first is used as the source texture and the second as the target. I do that by drawing a quad from [-1, -1] to [1, 1] while the viewport is set between [0, 0] and [textureSize, textureSize]. This way I get one fragment shader invocation per texel. In each invocation I read the first texture as input, update it and write it out to the second texture.
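The update mesh itself is just a full-screen quad with a single 2D position attribute. A minimal libGDX sketch of such a mesh (reconstructed for illustration, not my exact setup code):

// Full-screen quad as a triangle strip; matches the a_vertex attribute in the
// update shader below. (Sketch; buffer sizes and usage flags may differ.)
updateMesh = new Mesh(true, 4, 0,
        new VertexAttribute(VertexAttributes.Usage.Position, 2, "a_vertex"));
updateMesh.setVertices(new float[] { -1, -1,   1, -1,   -1, 1,   1, 1 });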
Then I render the second FBO's texture to the screen using a different shader and mesh, where every texel is represented by one vertex in the mesh. This way I can extract the particle position from the texture and set gl_Position accordingly in the vertex shader.
After that I swap the first and second FBO and continue with the next render loop. This means the two FBOs serve as GPU-side storage for the render data.
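The swap itself is nothing more than exchanging the two FrameBuffer references at the end of each loop:

// After the swap, fbo1 (holding the freshly written state) is the input again.
FrameBuffer tmp = fbo1;
fbo1 = fbo2;
fbo2 = tmp;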
This works totally fine in the desktop app and even in the Android emulator. On real Android devices, however, it fails: after the update, the second FBO's texture always contains [0, 0, 0, 0]. Rendering the data from the first buffer alone works fine, though.
Any idea?
My update shaders (they take the first FBO's texture and render into the second) are as follows.
Vertex shader:
#version 300 es
precision mediump float;
in vec2 a_vertex;
out vec2 v_texCoords;
void main()
{
    v_texCoords = a_vertex / 2.0 + 0.5;
    gl_Position = vec4(a_vertex, 0, 1);
}
Fragment shader:
#version 300 es
precision mediump float;
precision mediump usampler2D;
uniform usampler2D u_positionTexture;
uniform float u_delta;
in vec2 v_texCoords;
out uvec4 fragColor;
void main()
{
    uvec4 position_raw = texture(u_positionTexture, v_texCoords);
    vec2 position = vec2(
        uintBitsToFloat(position_raw.x),
        uintBitsToFloat(position_raw.y)
    );
    vec2 velocity = vec2(
        uintBitsToFloat(position_raw.z),
        uintBitsToFloat(position_raw.w)
    );

    // Usually I would alter position and velocity here and write them back
    // like this:
    // position += (velocity * u_delta);
    //
    // fragColor = uvec4(
    //     floatBitsToUint(position.x),
    //     floatBitsToUint(position.y),
    //     floatBitsToUint(velocity.x),
    //     floatBitsToUint(velocity.y));

    // Even with this the output is 0 on all channels:
    fragColor = uvec4(
        floatBitsToUint(50.0),
        floatBitsToUint(50.0),
        floatBitsToUint(0.0),
        floatBitsToUint(0.0));

    // Even writing the input through directly does not make the correct values
    // appear in the texture pixels:
    // fragColor = position_raw;
}
How I update the textures (from fbo1 to fbo2):
private void updatePositions(float delta) {
    fbo2.begin();
    updateShader.bind();
    Gdx.gl20.glViewport(0, 0, textureSize, textureSize);
    fbo1.getColorBufferTexture().bind(0);
    updateShader.setUniformf("u_delta", delta);
    updateShader.setUniformi("u_positionTexture", 0);
    Gdx.gl20.glDisable(GL20.GL_BLEND);
    Gdx.gl20.glBlendFunc(GL20.GL_ONE, GL20.GL_ZERO);
    updateMesh.render(updateShader, GL20.GL_TRIANGLE_STRIP);
    fbo2.end();
}

If you are reading a 32-bit per component texture, you need a highp sampler and you need to store the result in a highp variable.
Currently you are specifying mediump for the usampler2D, and the default int precision is also mediump. For integers, mediump is specified as "at least" 16-bit, so either of these may result in your 32-bit values being truncated.
Note the "at least": it's legal for an implementation to store this at a higher precision, so you may find it "happens to work" on some implementations (like the emulator) because that implementation chooses to use a wider type.

Related

How to draw multiple triangles each with a different transformation matrix with OpenGL ES?

Since I had no success drawing multiple triangles with a different matrix for each, for now I am stuck with transforming the vertices on the CPU and using a shader without any matrix transformation.
Help will be greatly appreciated!
Here is my current shader:
attribute vec2 vertices;
attribute vec2 textureUvs;
varying vec2 textureUv;
void main()
{
    gl_Position = vec4(vertices, 0.0, 1.0);
    textureUv = textureUvs;
}
It works very well, except that all vertices are transformed on the CPU before calling glDrawArrays(). I suppose I would get better performance if I could send each triangle's matrix and let OpenGL compute the vertices.
And here is the draw call:
public final static void drawTexture(FloatBuffer vertices, FloatBuffer textureUvs, int textureHandle, int count, boolean triangleFan)
{
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);
    GLES20.glUseProgram(progTexture);

    GLES20.glEnableVertexAttribArray(progTextureVertices);
    GLES20.glVertexAttribPointer(progTextureVertices, 2, GLES20.GL_FLOAT, false, 2 * Float.BYTES, vertices);

    GLES20.glEnableVertexAttribArray(progTextureUvs);
    GLES20.glVertexAttribPointer(progTextureUvs, 2, GLES20.GL_FLOAT, false, 2 * Float.BYTES, textureUvs);

    if (triangleFan)
    {
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4 * count); // ~10% faster
    }
    else
    {
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6 * count);
    }

    GLES20.glDisableVertexAttribArray(progTextureVertices);
    GLES20.glDisableVertexAttribArray(progTextureUvs);
}
Note that it is a sprite renderer, which is why I use only 2D vertices.
I finally answered my own question: yes, it is possible to draw multiple triangles with different matrices each in OpenGL ES 2.0, and it is worth it!
The answer is related to this one: How to include model matrix to a VBO? and @httpdigest's comment.
Basically, for sprites it only requires two vec3 attributes in the shader, which you fill with the first and second rows of your 3x3 matrix.
Here is the shader I am using:
attribute vec3 xTransform;
attribute vec3 yTransform;
attribute vec2 vertices;
attribute vec2 textureUvs;
varying vec2 textureUv;
void main()
{
    gl_Position = vec4(dot(vec3(vertices, 1.0), xTransform), dot(vec3(vertices, 1.0), yTransform), 0.0, 1.0);
    textureUv = textureUvs;
}
First you get the two attribute locations:
int progTextureXTransform = GLES20.glGetAttribLocation(progTexture, "xTransform");
int progTextureYTransform = GLES20.glGetAttribLocation(progTexture, "yTransform");
And when drawing, you pass one vector of each per vertex:
GLES20.glEnableVertexAttribArray(progTextureXTransform);
GLES20.glVertexAttribPointer(progTextureXTransform, 3, GLES20.GL_FLOAT, false, 3*Float.BYTES, xTransforms);
GLES20.glEnableVertexAttribArray(progTextureYTransform);
GLES20.glVertexAttribPointer(progTextureYTransform, 3, GLES20.GL_FLOAT, false, 3*Float.BYTES, yTransforms);
On a Galaxy Tab 2 this is twice as fast as computing the vertices on the CPU.
xTransform is the first row of your 3x3 matrix.
yTransform is the second row of your 3x3 matrix.
And of course this can be extended to 3D rendering by adding a zTransform and switching to vec4.
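For illustration, a hedged Java sketch of how the per-vertex row buffers might be filled (the helper name, the row-major float[9] layout and the 6-vertices-per-sprite assumption are mine, not from the original answer):

import java.nio.FloatBuffer;

// Append the two matrix rows once per vertex, so every vertex of a sprite
// carries its sprite's transform (two triangles = 6 vertices per sprite).
static void putSpriteTransform(FloatBuffer xTransforms, FloatBuffer yTransforms, float[] m) {
    for (int v = 0; v < 6; v++) {
        xTransforms.put(m[0]).put(m[1]).put(m[2]); // first row  -> xTransform
        yTransforms.put(m[3]).put(m[4]).put(m[5]); // second row -> yTransform
    }
}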

Android OpenGL ES 2.0, Compute Normal for each triangle

I need to compute normals for each triangle face (not for each vertex) using OpenGL ES 2.0 on Android. But I can't pass an attribute to the fragment shader directly.
I found one solution: repeat the vertices for each triangle, and pass the triangle face normal as an attribute in the vertex shader.
But I don't want to duplicate the vertices; I am drawing triangles using vertex indices.
Since a vertex is shared by more than one triangle, how should I compute the triangle face normals?
P.S. I am a newbie in OpenGL.
The easiest solution is just to duplicate the vertices. The vertex shader is very rarely a bottleneck. I do not know your specific needs; however, there are circumstances when duplicating vertices is not a good solution. For example, if the mesh is skinned and animated, a lot of computation happens in the vertex shader. Another case is when the mesh is animated in some weird way in the vertex shader and you have to recompute the normals. Clearly, you cannot compute per-face normals in the vertex shader. You could do that in a geometry shader, but we do not have one in OpenGL ES 2.0. However, there is a simple solution: compute the normals in the fragment shader! So, if duplicating vertices does not work for you, here is the solution:
We will need an OpenGL extension, standard_derivatives, which is widely supported, but you will still need to check whether it is supported on the device before running the code. To enable the extension, add the following line to the fragment shader before its code:
#extension GL_OES_standard_derivatives : enable
We will need a varying variable for the vertex position in world coordinates. It should be computed in the vertex shader, and how that is done depends very much on your shader. It is used for many purposes, so you may already compute it in your vertex shader. So let's assume that we have this line in the fragment shader:
varying vec3 positionWorld;
We will need the view matrix of the camera. It is possible that you are already passing one to the fragment shader. Let's assume we have this uniform in the fragment shader:
uniform mat4 viewMatrix;
Now we are going to compute the normal. First, we compute a normal in view space and then transform it into world space. To compute the normal in view space, we use the derivative functions:
vec3 normalViewSpace = normalize(cross(dFdx(positionWorldSpace), dFdy(positionWorldSpace)));
Here the derivative of the position is taken with respect to the x and y coordinates in screen space. That means we have two vectors that lie in the surface plane. To get the normal to the surface we take their cross product. Of course, the result is not a unit vector, so we also need to normalize it.
The last step is to compute the normal in world space. The view matrix applies the transformation from world space to view space. One might think we need to compute its inverse, since we need to go from view space back to world space, but because the view matrix is orthonormal, its transpose is also its inverse, so the code will be:
vec3 normalWorldSpace = (vec4(normalViewSpace, 0.0) * viewMatrix).xyz;
To make life easier, we can wrap everything into a function:
vec3 ReconstructNormal(vec3 positionWorldSpace)
{
    vec3 normalViewSpace = normalize(cross(dFdx(positionWorldSpace), dFdy(positionWorldSpace)));
    vec3 normalWorldSpace = (vec4(normalViewSpace, 0.0) * viewMatrix).xyz;
    return normalWorldSpace;
}
Now we have a reconstructed normal in world space. Below is just a simple example of why this can be very useful. Note that since it uses WebGL, it is also pretty much OpenGL ES 2.0 compatible.
var container;
var camera, scene, renderer;
var mesh;
var uniforms;
var clock = new THREE.Clock();

init();
animate();

function init() {
    container = document.getElementById('container');

    camera = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 0.1, 100);
    camera.position.z = 0.6;
    camera.position.y = 0.2;
    camera.rotation.x = -0.45;

    scene = new THREE.Scene();

    var boxGeometry = new THREE.PlaneGeometry(0.75, 0.75, 32, 32);
    var heightMap = THREE.ImageUtils.loadTexture("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAAFiUAABYlAUlSJPAAAAeeSURBVFhHTdfFq1VfFAfwfbzP7q5ndwcm2IqKgQoiOhQFQXAgDpxecObAP0ARdGJMdGDhwAILO7C7uzvv73wWHPltuLx7z9l7rfWNtc552ZQpUypt27ZN7dq1S/9f7969S1evXk116tRJtWrVSj9//kzfv39PX79+jfvdu3dPpVIp3b9/P3Xo0CH1798/tWrVKjVs2DBt3749vX//PnXr1i3VqFEj9jVq1Ci9efMm3blzJ7Vp0yY1adIk/fr1K2Xz5s2r2CjRy5cv42/r1q3Tnz9/0smTJ6MQh+vVq5du3LgRwSSsrq5OnTp1Sq9fv05fvnyJ38OHD0+7du1K+/bti8Jq166dfvz4EQU7v2TJkoh35cqVADN37txU1bJly0hm+VskaNasWXr+/HkEUe2AAQNSnz590ufPn1OLFi3Sx48f0+3btyO5fUePHo2CJ0yYkJo2bRpgqqqqIq4lzsaNG/8xaCkqGJDYjbdv36aaNWtGkKlTpwadEBTyPHz4MB0/fjw9efIknTp1Kn5bM2fODMqdlUjiFy9epAYNGsR9zCnYck0u+6KA5cuXVz59+hQXX716FbTWrVs3jR8/PnTt169f2rJlS9q2bVsEcNCi56NHj2IvyTAE+ePHj8Mv1u/fv0PSW7duhYwF01h2RsHZ4sWLw4SoVZ1No0ePjorpK8izZ8/+oejRo0dq37590P306dPYL6nCly5dGqyRD5uNGzf+J5Gk58+fj+Lu3bsXHsBsKa+6rHLJaevQgQMHIgGKbt68GYV8+/YtdBVM5c2bN093794NFnSE89euXYvvf//+Tbxlr2u6yHfxFaczLl26FEbNunTpUhk8eHAaMmRIOJSW6NUZUEvKB5IVa+DAgWncuHERvH79+oHGXmcZFgs+FpYktQ8b2OzVq1dI53wp17n84cOH0B/1c+bMiWIk6dixYxSjvzEjETTMh2IsYO/BgweRHAuoJomCJVq0aFHq2bNn9P/FixdDUrIxKtlKq1atKgtq8+HDhyOhBJs3b46gDEozruYBByWlJbSuW8ypKEEZbMyYMWnatGnxe/bs2SGj8zoIEKDFz9atW1dRoemltVRIQ0msQYMGxV9ONrlobpLpDguNCu7du3cwggVzBOUkVfzIkSMDGIDXr1+PYu0HvJQjL2sRSFUkuRvah0sVhTKGLFAzUWEwaEzBYujwjFiuQ01rPoAYesAUpJMUm61evboisCQuQLhnz55/wwJtClQxYxWTU3KLsxXgnsS6BpBiovouJqOPGjUq/GI/QwIYg0ilKmLCNWvWRKUYUIDrqtdeJqT29JAih6SYcV4y5yR2DgAsYKpz586xjyx8Ip4zfme5xhUobUIvhJI5LKilUnMAU5s2bUpHjhyJBJZkffv2TQsXLgxvSEhv0ulzC1rJ0C85mbW2XKU8cZluWocEivAdza6jmpkEKbrCb0uRCrAUp6hZs2bF044xtZ72dj3LstjP5L67RopS/oQr63HjGE2VSiUOqVYxKNVqdPRhKMbq2rVrJMUMUyoacmZ0HxMSug89YyteLuPcHvKGCaGlH00dKGZ10QkGEjQWlhiPhgzlGWLSKdp+Le2ae5gS234GFBsDpijmgCvllZXR5YJhxFyQCwYxrQREm6ECAZbco+HYsWNjJqDTnt27d8fEwwBAihfPuwPGMIExw4qpS/mXsocLFFpNVaq0UQIHtRYkPsxqL4Pmz5H4LjnjOnfs2LF/b0H+8g0wigGAAXnkwoULYdRSfrhMe4FpjWp6QsOx0Jpwng/QFAUKcOLEifjus3///kCGZtT7bi/UZCMxf/Ca/TqPjNnEiRMrKGIgzvRxoxjFWgxaBaLNfc91j9PipbUwnTZkLOMWYtrzku/YIqX9ZoHCmDJbuXJlxWFoVeqANXny5PiLdgEFGDFiRDp06FCg8c5g2Y9qUhTUkwPdEENrDRs2LJ4FJOEznUHebO3atRUDA21QM6MAPgynALoxFy/Q2D7f7ZWUhBZ2UKx4OpMBelLw06RJkyIPBrFw+vTpVMo1LqPDDQm9C+YvqoFSO27dujXt3bs3ng9aS1D7+EJxinAeEwrTFZ4dUGJAUQrUzszOI86KxV/ZggULKtB6WHC+YKqmt5dRSM+cORMIeUVy+/Uyr/CBYJiQLP9HJ15gCo8oBAOSkxEoIMQyM7INGzbEW7HFB2YCGplFN+hpKMkgqUDYcoa+06dPD13piQGv85KuX78+nhnFdFSgj9gex2YAk2b5W0s8jpkINTZLzNG+Q2mj32iGwBBCP5mgNVpNOoigO3fuXPQ4FgojAiGOLlHk0KFD4z0y/jFxiCksdKFZzxfz3JLQHkm9sDKVB5QgHL1ixYo4y9DenhQAGFnd12FAzJ8/P+IfPHgwnT17NpXyqstQ0Jx5zHSoTTeUFg8nlF++fDkQGiJ0J5l79kLpRVZg5739OiOGxM6ZJ87633HHjh0x9Er5y2PZTYZBPRSqdojZvFCqmAcgkojmPgaL/xfpumzZsvjN8SYqnwCEQei1pt9A7Ny5M4CZrlV0Qa8ukNwmfz1gFKFKukPqugQSksB0dI3RdMuMGTNCazJoNxK6rztIx/liGWgYr66uTv8BU33Si9zKcpYAAAAASUVORK5CYII=");
    heightMap.wrapT = heightMap.wrapS = THREE.RepeatWrapping;

    uniforms = { u_time: { type: "f", value: 0.0 }, u_heightMap: { type: "t", value: heightMap } };

    var material = new THREE.ShaderMaterial({
        uniforms: uniforms,
        side: THREE.DoubleSide,
        wireframe: false,
        vertexShader: document.getElementById('vertexShader').textContent,
        fragmentShader: document.getElementById('fragment_shader').textContent
    });

    mesh = new THREE.Mesh(boxGeometry, material);
    mesh.rotation.x = 3.14 / 2.0;
    scene.add(mesh);

    renderer = new THREE.WebGLRenderer();
    renderer.setClearColor(0x000000, 1);
    container.appendChild(renderer.domElement);

    onWindowResize();
    window.addEventListener('resize', onWindowResize, false);
}

function onWindowResize(event) {
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();
    renderer.setSize(window.innerWidth, window.innerHeight);
}

function animate() {
    requestAnimationFrame(animate);
    render();
}

function render() {
    var delta = clock.getDelta();
    uniforms.u_time.value += delta;
    //mesh.rotation.z += delta * 0.5;
    renderer.render(scene, camera);
}
body { margin: 0px; overflow: hidden; }
<script src="https://threejs.org/build/three.min.js"></script>
<div id="container"></div>
<script id="fragment_shader" type="x-shader/x-fragment">
#extension GL_OES_standard_derivatives : enable

varying vec3 positionWorld; // position of vertex in world coordinates

vec3 ReconstructNormal(vec3 positionWorldSpace)
{
    vec3 normalViewSpace = normalize(cross(dFdx(positionWorldSpace), dFdy(positionWorldSpace)));
    vec3 normalWorldSpace = (vec4(normalViewSpace, 0.0) * viewMatrix).xyz;
    return normalWorldSpace;
}

// Just some example of using a normal. Here we do a really simple shading
void main( void )
{
    vec3 lightDir = normalize(vec3(1.0, 1.0, 1.0));
    vec3 normal = ReconstructNormal(positionWorld);
    float diffuse = max(dot(lightDir, normal), 0.0);
    vec3 albedo = vec3(0.2, 0.4, 0.7);
    gl_FragColor = vec4(albedo * diffuse, 1.0);
}
</script>
<script id="vertexShader" type="x-shader/x-vertex">
uniform lowp sampler2D u_heightMap;
uniform float u_time;

varying vec3 positionWorld;

// Example of a vertex shader that moves vertices
void main()
{
    vec3 pos = position;
    vec2 offset1 = vec2(1.0, 0.5) * u_time * 0.01;
    vec2 offset2 = vec2(0.5, 1.0) * u_time * 0.01;
    float height1 = texture2D(u_heightMap, uv + offset1).r * 0.02;
    float height2 = texture2D(u_heightMap, uv + offset2).r * 0.02;
    pos.z += height1 + height2;
    vec4 mvPosition = modelViewMatrix * vec4(pos, 1.0);
    positionWorld = mvPosition.xyz;
    gl_Position = projectionMatrix * mvPosition;
}
</script>

Java/Processing - scale image matrix with wrapping edges

I've been wondering how to efficiently implement the following image-scale procedure in Java or Processing: when scaling out, the bounds of the image wrap around the screen edges. I'd like to apply the same at runtime to my Pixels() array in Processing. (To keep this Processing-agnostic: Pixels() is nothing more than a method that returns all pixels on my current screen in an array.)
(Note that this example was made in MaxMsp/Jitter using the jit.rota module, which appears to use a very efficient implementation).
(Screenshots: unscaled vs. zoomed out.)
Can anyone help me out on how to get started? I assume it must be a combination of downscaling the image and creating adjacent copies of it, but this doesn't sound very efficient to me. The above example works perfectly on videos with even the most extreme settings.
One option I can think of that will be fast is using a basic fragment shader.
Luckily you've got an example pretty close to what you need that ships with Processing via File > Examples > Topics > Shaders > Infinite Tiles
I won't be able to provide a decent start-to-finish guide here, but there's an exhaustive PShader tutorial on the Processing website if you're starting from scratch.
A really rough gist of what you need:
shaders are programs that run really fast and parallelised on the GPU. They are split into two kinds: vertex shaders (dealing mainly with 3D geometry) and fragment shaders (dealing mainly with "fragments", what's about to become pixels on screen). You'll want to play with a fragment shader
The language is called GLSL and is a bit different (fewer types, stricter, simpler syntax), but not totally alien (a similar C-like way of declaring variables, functions, conditions, loops, etc.)
if you want to make a variable from a GLSL program accessible in Processing, you prefix it with the keyword uniform
use textureWrap(REPEAT) to wrap edges
to scale the image and wrap it you'll need to scale the texture sampling coordinates:
Here's what the InfiniteTiles scroller shader looks like:
//---------------------------------------------------------
// Display endless moving background using a tile texture.
// Contributed by martiSteiger
//---------------------------------------------------------
uniform float time;
uniform vec2 resolution;
uniform sampler2D tileImage;
#define TILES_COUNT_X 4.0
void main() {
    vec2 pos = gl_FragCoord.xy - vec2(4.0 * time);
    vec2 p = (resolution - TILES_COUNT_X * pos) / resolution.x;
    vec3 col = texture2D(tileImage, p).xyz;
    gl_FragColor = vec4(col, 1.0);
}
You can simplify this a bit since you don't need the scrolling. Additionally, instead of subtracting and multiplying (- TILES_COUNT_X * pos), you can simply multiply:
//---------------------------------------------------------
// Display endless moving background using a tile texture.
// Contributed by martiSteiger
//---------------------------------------------------------
uniform float scale;
uniform vec2 resolution;
uniform sampler2D tileImage;
void main() {
    vec2 pos = gl_FragCoord.xy * vec2(scale);
    vec2 p = (resolution - pos) / resolution.x;
    vec3 col = texture2D(tileImage, p).xyz;
    gl_FragColor = vec4(col, 1.0);
}
Notice I've repurposed the time variable to become scale; therefore, the Processing code accessing this uniform variable must also change:
//-------------------------------------------------------------
// Display endless moving background using a tile texture.
// Contributed by martiSteiger
//-------------------------------------------------------------
PImage tileTexture;
PShader tileShader;
void setup() {
    size(640, 480, P2D);
    textureWrap(REPEAT);
    tileTexture = loadImage("penrose.jpg");
    loadTileShader();
}

void loadTileShader() {
    tileShader = loadShader("scroller.glsl");
    tileShader.set("resolution", float(width), float(height));
    tileShader.set("tileImage", tileTexture);
}

void draw() {
    tileShader.set("scale", map(mouseX, 0, width, -3.0, 3.0));
    shader(tileShader);
    rect(0, 0, width, height);
}
Move the mouse to change the scale.
I did effectively come up with a solution, but I will implement George's method next, as the speed difference using shaders seems to be worth it!
public void scalePixels(double wRatio, double hRatio, PGraphics viewPort) {
    viewPort.loadPixels();
    int[] pixelsArrayNew = viewPort.pixels.clone();
    double x_ratio = wRatio;
    double y_ratio = hRatio;
    double px, py;
    for (int i = 0; i < viewPort.height; i++) {
        for (int j = 0; j < viewPort.width; j++) {
            px = Math.floor(j % (wRatio * viewPort.width) / x_ratio);
            py = Math.floor(i % (hRatio * viewPort.height) / y_ratio);
            viewPort.pixels[(int) (i * viewPort.width) + j] = pixelsArrayNew[(int) ((py * viewPort.width) + px)];
        }
    }
    viewPort.updatePixels();
}
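For context, a hypothetical call site in a Processing sketch (sourceImage and the 0.5 ratios are placeholders; g is the sketch's own PGraphics):

void draw() {
    image(sourceImage, 0, 0, width, height); // draw the frame first
    scalePixels(0.5, 0.5, g);                // then wrap-scale the on-screen pixels
}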

How to obtain the RGB colors of the camera texture in OpenGL ES 2.0 for Android in Java

I'm trying to port a .NET app to Android where I capture each frame from the camera and then modify it accordingly to user settings before displaying it. Doing it in .NET was simple since I was able to simply query the camera for the next image and I would get a bitmap that I could access at will.
One of the many processing options requires the application to obtain the intensity histogram of each captured image and then do some modifications to the captured image before displaying the result (based on user settings). What I'm attempting to do is to capture and modify the camera preview in Android.
I understand that the "best" way to achieve some sort of real-time-ish camera processing is to use OpenGL as the preview framework, via a GLES11Ext.GL_TEXTURE_EXTERNAL_OES texture.
I am able to capture the preview and do some processing in my fragment shader, like turning the image grayscale, modifying the fragment colors, threshold clipping, etc. But for heavier processing, like computing a histogram or applying a Fast Fourier Transform, I need (fast) read/write access to all the pixels of the captured image, in RGB format, before displaying it.
I'm using Java with OpenGL ES 2.0 for Android.
My current code for drawing does the following:
private int mTexture; // texture handle created to hold the captured image
...
// Called for every captured frame
public void onDraw()
{
    int mPositionHandle;
    int mTextureCoordHandle;

    GLES20.glUseProgram(mProgram);
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTexture);

    // TODO:
    // Obtain RGB pixels of the texture and manipulate here.

    // TODO:
    // Put the resulting RGB pixels in a texture for display.

    // prepare for drawing
    mPositionHandle = GLES20.glGetAttribLocation(mProgram, "position");
    GLES20.glEnableVertexAttribArray(mPositionHandle);
    GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, vertexStride, vertexBuffer);

    mTextureCoordHandle = GLES20.glGetAttribLocation(mProgram, "inputTextureCoordinate");
    GLES20.glEnableVertexAttribArray(mTextureCoordHandle);
    GLES20.glVertexAttribPointer(mTextureCoordHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, vertexStride, textureVerticesBuffer);

    // draw the texture
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length,
            GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
}
My vertex and fragment shaders are very simple:
Vertex shader:
attribute vec4 position;
attribute vec2 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate;
}
Fragment shader (accesses the captured image directly):
/* Shader: Gray Scale*/
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 textureCoordinate;
uniform samplerExternalOES s_texture;
void main()
{
    float clr = dot(texture2D(s_texture, textureCoordinate), vec4(0.299, 0.587, 0.114, 0.0));
    gl_FragColor = vec4(clr, clr, clr, 1.0);
}
It would be ideal if I could obtain the width and height of the captured texture, and obtain and modify (or write into another texture) the RGB value of every pixel in the capture, for example as an array of bytes where each byte represents a color channel, for processing before displaying.
I am just starting to learn OpenGL ES and got this project under way. Any help is deeply appreciated, thank you.

Java OpenGL apply fragment shader to partially transparent texture mapped quad

I'm drawing a texture-mapped quad with a texture that has some transparent pixels in it. So I load the texture and then draw it:
gl.glEnable(GL.GL_ALPHA_TEST);
gl.glAlphaFunc(GL.GL_GREATER,0);
gl.glBindTexture(GL.GL_TEXTURE_2D, texture);
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, width, height, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, buff);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
gl.glTexEnvi(GL.GL_TEXTURE_ENV, GL.GL_TEXTURE_ENV_MODE, GL.GL_REPLACE);
gl.glColor4i(1,1,1,1);
gl.glBegin(GL.GL_QUADS);
// draw my quad here
gl.glEnd();
When I draw this, the transparent pixels do not show, as expected. However, I now want to apply a fragment shader. In this case I'll test something simple:
void main() {
    gl_FragColor = vec4(0.0);
    if (gl_Color.a > 0.0) {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    }
}
In this case, ALL of the pixels show up as red, instead of just the non-transparent ones. Can someone explain how to color just the non-transparent pixels using a shader?
thanks,
Jeff
gl_Color is the color output by the vertex shader. Unless your vertex shader does the texture fetching for you, it probably just passes the color attribute straight through to your fragment shader via gl_FrontColor.
If you're not using a vertex shader, and just using fixed-function vertex processing, then it's a certainty that the fragment shader was given only the color. Remember that fragment shaders override all per-fragment glTexEnv operations, including the fetching of the texture.
If you want to test the texture's opacity, you need to fetch from the texture yourself. That requires using a sampler2D object and the texture2D function (assuming you're using GLSL version 1.20; in later versions, you want the texture function). It's been a while since I did anything with OpenGL's built-in inputs and outputs, but the shader would look something like this:
#version 120
uniform sampler2D myTexture;
void main()
{
    gl_FragColor = vec4(0.0);
    vec4 texColor = texture2D(myTexture, gl_TexCoord[0].st); // assuming you're using texture coordinate 0
    if (texColor.a < 1.0)
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
You will also have to set that sampler uniform's value to the index of the texture unit you're using, not the texture handle.
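A rough JOGL-style sketch (program and texture are placeholders from the question's context):

gl.glUseProgram(program);                      // uniforms apply to the program in use
int loc = gl.glGetUniformLocation(program, "myTexture");
gl.glActiveTexture(GL.GL_TEXTURE0);            // select texture unit 0
gl.glBindTexture(GL.GL_TEXTURE_2D, texture);   // bind the quad's texture to it
gl.glUniform1i(loc, 0);                        // pass the unit index (0), not the handle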
