Java OpenGL: apply fragment shader to partially transparent texture-mapped quad

I'm drawing a texture-mapped quad with a texture that has some transparent pixels in it. So I load the texture and then draw it:
gl.glEnable(GL.GL_ALPHA_TEST);
gl.glAlphaFunc(GL.GL_GREATER,0);
gl.glBindTexture(GL.GL_TEXTURE_2D, texture);
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, width, height, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, buff);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
gl.glTexEnvi(GL.GL_TEXTURE_ENV, GL.GL_TEXTURE_ENV_MODE, GL.GL_REPLACE);
gl.glColor4i(1,1,1,1);
gl.glBegin(GL.GL_QUADS);
// draw my quad here
gl.glEnd();
When I draw this, the transparent pixels are not drawn, as expected. Now, however, I want to apply a fragment shader. In this case I'll test something simple:
void main() {
gl_FragColor = vec4(0.0);
if ( gl_Color.a > 0.0 ) {
gl_FragColor = vec4(1.0,0,0,1.0);
}
}
In this case, ALL of the pixels show up as red, instead of just the non-transparent ones. Can someone explain how to color only the non-transparent pixels using a shader?
thanks,
Jeff

gl_Color is the color output by the vertex shader. Unless your vertex shader does the texture fetching for you, it probably just passes the color attribute through to your fragment shader via gl_FrontColor.
If you're not using a vertex shader, and just using fixed-function vertex processing, then it's a certainty that the fragment shader was given only the color. Remember that fragment shaders override all per-fragment glTexEnv operations, including the fetching of the texture.
If you want to test the texture's opacity, then you need to fetch from the texture yourself. That requires a sampler2D object and the texture2D function (assuming you're using GLSL version 1.20; in later versions, you want the texture function). It's been a while since I did anything with OpenGL's built-in inputs and outputs, but the shader would look something like this:
#version 120
uniform sampler2D myTexture;
void main()
{
gl_FragColor = vec4(0.0);
vec4 texColor = texture2D(myTexture, gl_TexCoord[0].st); //Assuming you're using texture coordinate 0. texture2D wants a vec2, hence .st
if(texColor.a > 0.0)
gl_FragColor = vec4(1.0,0,0,1.0);
}
You will also have to set the program uniform's value to the texture unit you're using. You'll have to look that one up though.
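As a sketch of that lookup in JOGL (program, texture, and the unit number are whatever your own code uses; the names here are hypothetical):

```java
// Sketch: point the sampler uniform at texture unit 0.
// "program" is your linked shader program id; names are hypothetical.
gl.glUseProgram(program);
int loc = gl.glGetUniformLocation(program, "myTexture");
gl.glUniform1i(loc, 0);                      // myTexture samples from unit 0
gl.glActiveTexture(GL.GL_TEXTURE0);          // select unit 0...
gl.glBindTexture(GL.GL_TEXTURE_2D, texture); // ...and bind your texture to it
```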

Related

Can't write to GL_RGBA32UI FBO texture on OpenGL ES

I have two GL_RGBA32UI FBO textures, which I use to store the current state of particle positions/velocities per texel.
The first I fill with data like this only once:
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL30.GL_RGBA32UI, width, height, 0, GL30.GL_RGBA_INTEGER, GL20.GL_UNSIGNED_INT, buffer);
Per render loop, the second one is written to via a shader while the first is used as texture and the second as target. I do that by drawing a quad from [-1, -1] to [1, 1] while the viewport is set between [0, 0] and [textureSize, textureSize]. This way, in the fragment shader I get one shader run per texel. In each run I read the first texture as input, update it, and write it out to the second texture.
Then I render the second FBO's texture to the screen using a different shader and mesh, where every texel would be represented by one vertex in the mesh. This way I can extract the particle position from the texture and set gl_Position accordingly in the vertex shader.
After that I switch the first and second FBO and continue with the next render loop. This means the two FBOs are used as GPU-based storage for the render data.
This works totally fine in the desktop app and even in the Android emulator. It fails on real Android devices though: there, the second FBO's texture always contains [0, 0, 0, 0] after the update pass. Rendering the data from the first buffer directly works fine, though.
Any idea?
My update shaders (take first FBO's texture and render it to the second's) are as follows.
Vertex shader:
#version 300 es
precision mediump float;
in vec2 a_vertex;
out vec2 v_texCoords;
void main()
{
v_texCoords = a_vertex / 2.0 + 0.5;
gl_Position = vec4(a_vertex, 0, 1);
}
Fragment shader:
#version 300 es
precision mediump float;
precision mediump usampler2D;
uniform usampler2D u_positionTexture;
uniform float u_delta;
in vec2 v_texCoords;
out uvec4 fragColor;
void main()
{
uvec4 position_raw = texture(u_positionTexture, v_texCoords);
vec2 position = vec2(
uintBitsToFloat(position_raw.x),
uintBitsToFloat(position_raw.y)
);
vec2 velocity = vec2(
uintBitsToFloat(position_raw.z),
uintBitsToFloat(position_raw.w)
);
// Usually I would alter position and velocity vector here and write it back
// like this:
// position += (velocity * u_delta);
//
// fragColor = uvec4(
// floatBitsToUint(position.x),
// floatBitsToUint(position.y),
// floatBitsToUint(velocity.x),
// floatBitsToUint(velocity.y));
// Even with this the output is 0 on all channels:
fragColor = uvec4(
floatBitsToUint(50.0),
floatBitsToUint(50.0),
floatBitsToUint(0.0),
floatBitsToUint(0.0));
// Even writing the input directly would not make the correct values appear in the texture pixels:
// fragColor = position_raw;
}
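The float-to-uint bit cast the shader relies on can be reproduced on the Java side; Float.floatToRawIntBits and Float.intBitsToFloat are the CPU equivalents of floatBitsToUint and uintBitsToFloat (a sketch; the class name is made up):

```java
// Sketch: the same bit-for-bit float <-> uint round trip the shader performs
// with floatBitsToUint/uintBitsToFloat, done on the Java side.
public class BitCast {
    static int pack(float f)   { return Float.floatToRawIntBits(f); } // float -> raw bits
    static float unpack(int u) { return Float.intBitsToFloat(u); }    // raw bits -> float
}
```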
How I update the textures (from fbo1 to fbo2):
private void updatePositions(float delta) {
fbo2.begin();
updateShader.bind();
Gdx.gl20.glViewport(0, 0, textureSize, textureSize);
fbo1.getColorBufferTexture().bind(0);
updateShader.setUniformf("u_delta", delta);
updateShader.setUniformi("u_positionTexture", 0);
Gdx.gl20.glDisable(GL20.GL_BLEND);
Gdx.gl20.glBlendFunc(GL20.GL_ONE, GL20.GL_ZERO);
updateMesh.render(updateShader, GL20.GL_TRIANGLE_STRIP);
fbo2.end();
}
If you are reading a 32-bit per component texture you need a highp sampler and you need to store the result in a highp variable.
Currently you are specifying mediump for the usampler2D, and the default int precision is also mediump. For integers, mediump is specified as "at least" 16 bits, so either of these may result in your 32-bit values being truncated.
Note the "at least" - it's legal for an implementation to store this at a higher precision - so you may find "it happens to work" on some implementations (like the emulator) because that implementation chooses to use a wider type.

OpenGL rendering to texture produces empty Texture

I am trying to render an ortho projection of my scene's depth values to a texture, in order to use that texture in a later render cycle to determine which fragments are in shadow. Basically a shadow map.
However, the texture that I am rendering to ends up uniformly empty. Given that I can only really test it in a shader, I am limited in what output I can generate, but it seems that all my z values in the texture are 0.
Here is the code that generates the texture (width and height are 1024 and pixelFormat is GL_DEPTH_COMPONENT):
this.id = glGenTextures();
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, pixelFormat, GL_FLOAT, (ByteBuffer) null);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
return id;
Here I create the FrameBuffer and attach the Texture:
// Create a FBO to render the depth
this.depthMapFBO = glGenFramebuffers();
// Create the depth map texture
this.depthMap = new Texture(SHADOW_MAP_WIDTH, SHADOW_MAP_HEIGHT, GL_DEPTH_COMPONENT);
// Attach the depth map texture to the FBO
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, this.depthMap.getId(), 0);
// Set only depth
glDrawBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
throw new Exception("Could not create FrameBuffer: " + glCheckFramebufferStatus(GL_FRAMEBUFFER));
}
// Unbind
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Before I render my Scene I call this function to render the depth to the texture:
if(shaderMap.containsKey("shadow")){
shaderprogram = shaderMap.get("shadow");
}
shaderprogram.bind();
Sun sun = resourceManager.getSun();
Matrix4f LightViewMatrix = transformation.getLightViewMatrix(sun);
Matrix4f modelLightViewMatrix = transformation.getModelViewMatrix(object, LightViewMatrix);
shaderprogram.setUniform("modelLightViewMatrix",modelLightViewMatrix);
glBindFramebuffer(GL_FRAMEBUFFER,this.shadowmap.getDepthMapFBO());
glViewport(0, 0, 1024, 1024);
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
this.shadowmap.getDepthMapTexture().bind();
glPolygonMode( GL_FRONT_AND_BACK, GL_FILL );
glBindVertexArray(object.getMesh().getVaoId());
glEnableVertexAttribArray(0);//Vertex positions
glEnableVertexAttribArray(1);//Color Positions
glEnableVertexAttribArray(2);//Normals
glDrawElements(GL_TRIANGLES, object.getMesh().getVertexcount(),GL_UNSIGNED_INT ,0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glBindVertexArray(0);
glBindTexture(GL_TEXTURE_2D,0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
shaderprogram.unbind();
I can post the matrices for the orthographic projection and the LightViewMatrix if needed, but I did test them: rendering my scene with them gives the desired effect of the camera flying over the terrain and looking at the center of the map, basically how you would imagine the scene to look if the camera were the sun. So I don't think there is anything wrong with them.
This is my second render with normal projections. Basically the normal Camera:
shaderprogram.createUniform("shadowMap");
glActiveTexture(GL_TEXTURE4);
this.shadowmap.getDepthMapTexture().bind();
shaderprogram.setUniform("shadowMap", 4);
glPolygonMode( GL_FRONT_AND_BACK, GL_FILL );
glBindVertexArray(object.getMesh().getVaoId());
glEnableVertexAttribArray(0);//Vertex positions
glEnableVertexAttribArray(1);//Color Positions
glEnableVertexAttribArray(2);//Normals
glDrawElements(GL_TRIANGLES, object.getMesh().getVertexcount(),GL_UNSIGNED_INT ,0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glBindVertexArray(0);
glBindTexture(GL_TEXTURE_2D,0);
shaderprogram.unbind();
Some parts are left out, but I think these are the most important code parts where the error might be.
Here is the vertex and the fragment shader that is used in the first render cycle for the shadowmap:
#version 330
layout (location=0) in vec3 position;
layout (location=1) in vec2 texCoord;
layout (location=2) in vec3 vertexNormal;
uniform mat4 modelLightViewMatrix;
uniform mat4 orthoProjectionMatrix;
void main()
{
gl_Position = orthoProjectionMatrix * modelLightViewMatrix * vec4(position, 1.0f);
}
I know I am not using the texCoords and the vertexNormal.
Here the fragment shader:
#version 330
void main()
{
gl_FragDepth = gl_FragCoord.z;
}
It should just store the fragment's depth value.
And here the part of the normal scenes fragment shader:
float shadowfactor = 0;
vec3 projCoords = mlightviewVertexPos.xyz;
projCoords = projCoords * 0.5 + 0.5;
if (projCoords.z < texture(shadowMap,projCoords.xy).r){
// Current fragment is not in shade
shadowfactor = 1;
}else{
shadowfactor = 0.5;
}
color = color * (vec4(1,1,1,1)* shadowfactor);
fragColor = color;
I'm inputting the ortho matrix and the LightViewMatrix to determine where the fragment would be from the sun's POV, and checking the z value at that part of the texture.
The problem is that the result is uniformly black. I tried assigning texture(shadowMap, projCoords.xy).r directly to the fragment color to see if there are any differences anywhere, but it is all the same black color, i.e. 0.
I also tried using the shadow map texture directly on the terrain to see if there is anything in there, but I also only get a black texture.
I am aware that this is a very long question, but I have tried debugging it for the last 2 days and can't find the error. My guess is that I'm either not binding the texture right or that the wrong framebuffer is used in the render cycle.
Hopefully someone wants to help and can find the error.
Thank you for your time in advance,
Alex

How to obtain the RGB colors of the camera texture in OpenGL ES 2.0 for Android in Java

I'm trying to port a .NET app to Android where I capture each frame from the camera and then modify it accordingly to user settings before displaying it. Doing it in .NET was simple since I was able to simply query the camera for the next image and I would get a bitmap that I could access at will.
One of the many processing options requires the application to obtain the intensity histogram of each captured image and then do some modifications to the captured image before displaying the result (based on user settings). What I'm attempting to do is to capture and modify the camera preview in Android.
I understand that the "best" way to achieve some sort of real time-ish camera processing is by using OpenGL as the preview framework by using a GLES11Ext.GL_TEXTURE_EXTERNAL_OES texture.
I am able to capture the preview and do some processing in my fragment shader, such as converting the image to grayscale, modifying fragment colors, threshold clipping, etc. But for heavier processing like computing a histogram or applying a Fast Fourier Transform, I need fast read/write access to all the pixels of the captured image in RGB format, contained in the texture, before displaying it.
I'm using Java with OpenGL ES 2.0 for Android.
My current code for drawing does the following:
private int mTexture; // texture handle created to hold the captured image
...
// Called for every captured frame
public void onDraw()
{
int mPositionHandle;
int mTextureCoordHandle;
GLES20.glUseProgram(mProgram);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTexture);
// TODO:
// Obtain RGB pixels of the texture and manipulate here.
// TODO:
// Put the resulting RGB pixels in a texture for display.
// prepare for drawing
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "position");
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false,vertexStride, vertexBuffer);
mTextureCoordHandle = GLES20.glGetAttribLocation(mProgram, "inputTextureCoordinate");
GLES20.glEnableVertexAttribArray(mTextureCoordHandle);
GLES20.glVertexAttribPointer(mTextureCoordHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false,vertexStride, textureVerticesBuffer);
// draw the texture
GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length,
GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
}
My vertex and fragment shaders are very simple:
Vertex shader:
attribute vec4 position;
attribute vec2 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate;
}
Fragment shader (accesses the captured image directly):
/* Shader: Gray Scale*/
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 textureCoordinate;
uniform samplerExternalOES s_texture;
void main()
{
float clr = dot(texture2D(s_texture, textureCoordinate), vec4(0.299, 0.587, 0.114, 0.0));
gl_FragColor = vec4(clr, clr, clr, 1.0);
}
It would be ideal if I were able to obtain the width and height of the captured texture, and to obtain and modify (or write into another texture) the RGB value of every pixel in the capture, such as an array of bytes where each byte represents a color channel, for processing before displaying.
I am starting to learn OpenGL ES and I got this project on the way. Any help is deeply appreciated, thank you.
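One common way to get at the RGB bytes (a sketch, not from the post: it assumes the external texture has already been drawn into an FBO-attached GL_RGBA texture; the offscreenTex, width, and height names are hypothetical):

```java
// Sketch: render the external texture into an FBO-attached GL_RGBA texture,
// then read the pixels back for CPU-side processing. Handles are hypothetical.
int[] fbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, offscreenTex, 0);

// ... draw the full-screen quad with your shader here ...

ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
// pixels now holds 4 bytes (R, G, B, A) per pixel, e.g. for building
// the intensity histogram before drawing the final result.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
```

Note that glReadPixels stalls the pipeline, so for real-time use the readback is usually done at reduced resolution or on alternate frames.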

OpenGL JOGL texture outline

I'm drawing some points in OpenGL (JOGL) as follows:
BufferedImage image = loadMyTextureImage();
Texture tex = TextureIO.newTexture(image, false);
tex.setTexParameteri(GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
tex.setTexParameteri(GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
tex.bind();
gl.glColor4f(r,g,b,a);
gl.glBegin(GL_POINTS);
for ( int i = 0; i < numPoints; i++ ) {
// compute x,y,z
gl.glVertex3f(x,y,z);
}
gl.glEnd();
My image is a white image, so I can reuse the same texture and just color it using gl.glColor4f, but I would like to draw an outline around it in a different color. Is there a way to do that?
If you're using the texture to determine the shape of the point, then the obvious way to do the outline would be to add a second texture to draw the outline of the point on top.
The outline texture would also be white, so you could colour it to any colour you like in the same way.
Depending on the alpha-blending mode you use, this can also be used to give a "glowing" edge effect.
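Sketched against the drawing code above (outlineTex, the outline colour components, and the drawPoints helper wrapping the glBegin/glVertex3f loop are all hypothetical):

```java
// Sketch: two passes over the same points; the second pass lays the
// white outline texture on top, tinted with its own colour.
tex.bind();
gl.glColor4f(r, g, b, a);        // fill colour
drawPoints(gl);                  // the glBegin(GL_POINTS)/glVertex3f loop above
outlineTex.bind();               // hypothetical second (outline) texture
gl.glColor4f(or, og, ob, oa);    // outline colour
drawPoints(gl);                  // same points again, outline drawn on top
```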

OpenGL ES: Drawing using a Texture Atlas

I'm trying to develop an Android 2D game using OpenGL ES that uses a tiled map, and I heard that it's best to store the tiles in a texture atlas (one large bitmap containing multiple tiles) for performance reasons.
Does anyone have some sample code that demonstrates how to draw a tile from a texture atlas in Android OpenGL ES?
onDrawFrame(GL10 gl) {
...
}
Well I figured out how to do it.
onDrawFrame(GL10 gl) {
...
int[] crop = new int[4];
crop[0] = tileWidth * tileIndex; // left edge: tileIndex is the nth tile in the (single-row) atlas
crop[1] = tileHeight;            // bottom edge of the tile
crop[2] = tileWidth;             // crop width
crop[3] = -tileHeight;           // negative height flips the tile right side up
// specify the source rectangle
((GL11) gl).glTexParameteriv(GL10.GL_TEXTURE_2D, GL11Ext.GL_TEXTURE_CROP_RECT_OES, crop, 0);
// draw the texture
((GL11Ext)gl).glDrawTexiOES(x, y, 0, tileWidth, tileHeight);
...
}
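For atlases with more than one row of tiles, the crop origin can be computed from the tile index (a sketch; tilesPerRow is an assumption not in the original code, which uses a single-row atlas):

```java
// Sketch: compute the GL_TEXTURE_CROP_RECT_OES rectangle {Ucr, Vcr, Wcr, Hcr}
// for the nth tile in a multi-row atlas. tilesPerRow is hypothetical.
public class AtlasCrop {
    static int[] cropRect(int tileIndex, int tilesPerRow, int tileWidth, int tileHeight) {
        int u = (tileIndex % tilesPerRow) * tileWidth;      // left edge of the tile
        int v = (tileIndex / tilesPerRow + 1) * tileHeight; // bottom edge (height below is negative)
        return new int[] { u, v, tileWidth, -tileHeight };
    }
}
```

For a single-row atlas (tilesPerRow large enough) this reduces to the values used above.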
