OpenGL ES 2.0: Get texture size and other info - Java

The context of the question is OpenGL ES 2.0 in the Android environment. I have a texture, and displaying or using it is no problem.
Is there a method to find out its width and height and other info (like the internal format), given only its binding ID?
I need to save the texture to a bitmap without knowing the texture size.

Not in ES 2.0. It's actually kind of surprising that the functionality is not there. You can get the size of a renderbuffer, but not the size of a texture, which seems inconsistent.
The only things available are the values you can get with glGetTexParameteriv(), which are the FILTER and WRAP parameters of the texture.
It's not in ES 3.0 either. Only in ES 3.1 was glGetTexLevelParameteriv() added, which gives you access to all the values you're looking for. For example, to get the width and height of the currently bound texture:
int[] texDims = new int[2];
GLES31.glGetTexLevelParameteriv(GLES31.GL_TEXTURE_2D, 0, GLES31.GL_TEXTURE_WIDTH, texDims, 0);
GLES31.glGetTexLevelParameteriv(GLES31.GL_TEXTURE_2D, 0, GLES31.GL_TEXTURE_HEIGHT, texDims, 1);
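The other values the question asks about can be queried the same way. A minimal sketch, under the same ES 3.1 requirement and assuming GLES31 exposes the GL_TEXTURE_INTERNAL_FORMAT constant (it is one of the level parameters defined by ES 3.1):
int[] internalFormat = new int[1];
GLES31.glGetTexLevelParameteriv(GLES31.GL_TEXTURE_2D, 0,
        GLES31.GL_TEXTURE_INTERNAL_FORMAT, internalFormat, 0); // e.g. GL_RGBA8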

As @Reto Koradi said, there is no way to do it, but you can store the width and height of a texture when you load it from the Android context, before you bind it in OpenGL.
AssetManager am = context.getAssets();
InputStream is = null;
try {
    is = am.open(name);
} catch (IOException e) {
    e.printStackTrace();
}
final Bitmap bitmap = BitmapFactory.decodeStream(is);
int width = bitmap.getWidth();
int height = bitmap.getHeight();
// here is where you bind your texture in OpenGL

I'll suggest a hack for doing this: use ESSL's textureSize function. To access its result from the CPU side you'll have to pass the texture as a uniform to a shader and output the texture size in your shader output (e.g. the r and g components). Apply this shader to a 1x1 px primitive drawn into a 1x1 px FBO, then read the drawn value back from the GPU with glReadPixels.
You'll have to be careful with rounding, clamping and FBO formats. You may need a 16-bit integer FBO format.
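A rough sketch of that hack, assuming an OpenGL ES 3.0 context (textureSize() is an ESSL 3.00 built-in) and with the FBO, program and draw-call setup omitted. This variant packs width and height into the four channels of a plain RGBA8 target so no 16-bit format is needed:
// Fragment shader that writes the size of the bound texture instead of a color.
private static final String SIZE_FRAGMENT_SHADER =
        "#version 300 es\n" +
        "precision highp float;\n" +
        "uniform sampler2D uTex;\n" +
        "out vec4 outColor;\n" +
        "void main() {\n" +
        "    ivec2 size = textureSize(uTex, 0);\n" +
        "    // pack each 16-bit dimension into two 8-bit channels\n" +
        "    outColor = vec4(float(size.x % 256), float(size.x / 256),\n" +
        "                    float(size.y % 256), float(size.y / 256)) / 255.0;\n" +
        "}\n";
// After drawing a single 1x1 px primitive into a 1x1 px RGBA8 FBO with that shader:
ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
GLES30.glReadPixels(0, 0, 1, 1, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, pixel);
int texWidth  = (pixel.get(0) & 0xFF) | ((pixel.get(1) & 0xFF) << 8);
int texHeight = (pixel.get(2) & 0xFF) | ((pixel.get(3) & 0xFF) << 8);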

Related

Changing fading radius for vignette correction (OpenCV Java)

I'm working on simple vignette correction using OpenCV (v4.1) for Java.
The idea was to create a fading circle (from black to white) and add its values to the brightness (V) channel of my image. This already works; however, I'd like the fading band of the circle to be wider, so that the transition in my final image isn't as obvious but smoother instead
(see snapshot below).
I created the vignette template using the getGaussianKernel method, but I believe I cannot modify much here. I can change the sigma value, but that only changes the size of the circle. Is there another, more suitable method? Performance is pretty important, since I have to perform this operation on many images.
Here is my current approach:
public void Vignette(Mat img) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // initializes the OpenCV library
    Mat column = new Mat();
    Mat row = new Mat();
    Mat product = new Mat();
    Mat finalImage = new Mat();
    int sigma = 240; // vignette aperture
    // creating vignette template
    column = Imgproc.getGaussianKernel(img.cols(), sigma, org.opencv.core.CvType.CV_32F);
    row = Imgproc.getGaussianKernel(img.rows(), sigma, org.opencv.core.CvType.CV_32F);
    Core.gemm(row, column.t(), 1, new Mat(), 0, product); // generalized matrix multiplication for column x row matrix
    Core.normalize(product, product, 255, 0, Core.NORM_MINMAX); // scaling values to [0...255]
    product.convertTo(product, org.opencv.core.CvType.CV_8UC3, 255); // convert to an 8-bit matrix
    Core.bitwise_not(product, product); // invert vignette template
    Imgproc.cvtColor(img, img, Imgproc.COLOR_BGR2HSV); // convert image from BGR to HSV
    Vector<Mat> channels = new Vector<>(3);
    Core.split(img, channels); // split HSV channels
    Core.add(channels.get(2), product, channels.get(2)); // add value from product matrix to corresponding value of Brightness channel
    Core.merge(channels, img); // merge HSV channels back together
    Imgproc.cvtColor(img, img, Imgproc.COLOR_HSV2BGR); // convert image back to BGR
    finalImage = img; // shows image with vignette correction
    // finalImage = product; // shows vignette template
}
Snapshot (Vignette template, 'fading width' marked red):
I'm not an expert in OpenCV, but I've worked with it, and in cases like this I usually used a Gaussian blur. It might not be the cleanest way of doing it, but it usually gets the job done very well.
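If you go that route, a minimal sketch would be to blur the finished 8-bit vignette template right before it is added to the V channel; blurSigma is a made-up value you would tune (a larger sigma gives a wider, smoother transition):
// widen the black-to-white band of the vignette template
double blurSigma = 75.0; // assumption: tune to taste / image size
Imgproc.GaussianBlur(product, product, new org.opencv.core.Size(0, 0), blurSigma);
With the kernel size set to (0, 0), OpenCV derives it from the sigma, so only one parameter needs tuning.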

(LWJGL3) OpenGL 2D Texture Array stays empty after uploading image data with glTexSubImage3D

So I'm currently trying to replace my old texture atlas stitcher with a 2D texture array to make life simpler with anisotropic filtering and greedy meshing later on.
I'm loading the PNG files with stb, and I know that the buffers are filled properly, because if I export every single layer of the soon-to-be atlas right before uploading it, it's the correct PNG file.
My setup works like this:
I'm loading every single texture in my jar file with stb and creating an object with it that stores the width, height, layer and pixel data.
When every texture is loaded, I look for the biggest texture and scale every smaller texture up to the same size, because I know that 2D texture arrays only work if every single layer has the same size.
Then I initialize the 2D texture array like this:
public void init(int layerCount, boolean supportsAlpha, int textureSize) {
    this.textureId = glGenTextures();
    this.maxLayer = layerCount;
    int internalFormat = supportsAlpha ? GL_RGBA8 : GL_RGB8;
    this.format = supportsAlpha ? GL_RGBA : GL_RGB;
    glBindTexture(GL_TEXTURE_2D_ARRAY, this.textureId);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, internalFormat, textureSize, textureSize, layerCount, 0, this.format, GL_UNSIGNED_BYTE, 0);
}
After that I go through my map of textureLayer objects and upload every single one of them like this:
public void upload(ITextureLayer textureLayer) {
    if (textureLayer.getLayer() >= this.maxLayer) {
        LOGGER.error("Tried uploading a texture with a too big layer.");
        return;
    } else if (this.textureId == 0) {
        LOGGER.error("Tried uploading texture layer to uninitialized texture array.");
        return;
    }
    glBindTexture(GL_TEXTURE_2D_ARRAY, this.textureId);
    // Tell OpenGL how to unpack the RGBA bytes
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // Tell OpenGL to not blur the texture when it is stretched
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Upload the texture data
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, textureLayer.getLayer(), textureLayer.getWidth(), textureLayer.getHeight(), 0, this.format, GL_UNSIGNED_BYTE, textureLayer.getPixels());
    int errorCode = glGetError();
    if (errorCode != 0) LOGGER.error("Error while uploading texture layer {} to graphics card. {}", textureLayer.getLayer(), GLHelper.errorToString(errorCode));
}
The error code for every single one of my layers is 0, so I assume that everything went well. But when I debug the game with RenderDoc, I can see that on every single layer every bit is 0, and therefore it's just a transparent texture with the correct width and height.
I can't figure out what I'm doing wrong, since OpenGL tells me everything went well. It is important to me that I only use OpenGL 3.3 and lower, since I want the game to be playable on older PCs as well, so pre-allocating memory with glTexStorage3D is not an option.
The 8th parameter of glTexSubImage3D (depth) should be 1.
Note, the size of the layer is textureLayer.getWidth(), textureLayer.getHeight(), 1:
glTexSubImage3D(
    GL_TEXTURE_2D_ARRAY, 0, 0, 0, textureLayer.getLayer(),
    textureLayer.getWidth(), textureLayer.getHeight(), 1, // depth is 1
    this.format, GL_UNSIGNED_BYTE, textureLayer.getPixels());
It is not an error to pass a width, height or depth of 0 to glTexSubImage3D, but it won't have any effect on the texture object's data store.

Why are the textured image colors not the same as the original?

I'm using the LWJGL (OpenGL for Java) library for texture mapping.
Here is the code to read the image from a file:
BufferedImage image = ImageIO.read(new File(url));
The code for getting the data raster (the pixels of the image) as a byte array:
DataBufferByte imageByteBuffer = ((DataBufferByte)image.getRaster().getDataBuffer());
byte[] bytePixels = imageByteBuffer.getData();
Now the code that creates the byte buffer and puts the "bytePixels" array into it:
pixels = BufferUtils.createByteBuffer(bytePixels.length);
pixels.put(bytePixels);
pixels.flip();
Here is the code that binds all of that to the texture:
id = GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, id);
GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, image.getWidth(), image.getHeight(), 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, pixels);
The problem is that the colors of the textured image aren't the same as the original image's colors!
Original Picture:
Textured image:
This answer to OpenGL Renders texture with different color than original image? can't solve this issue, because GL_BGR is not valid in LWJGL's class GL11!
The issue is that the red and blue color channels are swapped.
In desktop OpenGL there is the possibility to use the GL_BGR format, which specifies a pixel format where the color channels are swapped (compared to GL_RGB).
See OpenGL 4 Refpages - glTexImage2D
and OpenGL Renders texture with different color than original image?.
In OpenGL ES you have to swap the red and blue color channels manually, because the GL_BGR format is missing.
See OpenGL ES 3.0 Refpages - glTexImage2D
and lwjgl - class GL11.
// Swap the red and blue channel of every pixel in the byte array (3 bytes per pixel),
// then put the swapped data into the buffer:
for (int i = 0; i < bytePixels.length; i += 3) {
    byte t = bytePixels[i];
    bytePixels[i] = bytePixels[i + 2];
    bytePixels[i + 2] = t;
}
pixels = BufferUtils.createByteBuffer(bytePixels.length);
pixels.put(bytePixels);
pixels.flip();
Another possibility is offered by OpenGL ES 3.0 or by the OpenGL extension EXT_texture_swizzle:
Since OpenGL ES 3.0 you can use the texture swizzle parameters to swap the color channels. See glTexParameter:
GL_TEXTURE_SWIZZLE_R
Sets the swizzle that will be applied to the r component of a texel before it is returned to the shader. Valid values for param are GL_RED, GL_GREEN, GL_BLUE, GL_ALPHA, GL_ZERO and GL_ONE. If GL_TEXTURE_SWIZZLE_R is GL_RED, the value for r will be taken from the first channel of the fetched texel. If GL_TEXTURE_SWIZZLE_R is GL_GREEN, the value for r will be taken from the second channel of the fetched texel. ...
This means the color channels will be swapped when the texture is looked up, by setting the following texture parameters to the texture object:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_BLUE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
The relevant part in the specification can be found at OpenGL ES 3.0.5 Specification; 3.8.14 Texture State; page 162
To check whether an OpenGL extension is available, glGetString(GL_EXTENSIONS) can be used, which returns a space-separated list of supported extensions.
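A minimal check for the extension might look like this in LWJGL (assuming a context where querying GL_EXTENSIONS as a single string is still allowed, i.e. not a core profile):
String extensions = GL11.glGetString(GL11.GL_EXTENSIONS);
boolean hasSwizzle = extensions != null && extensions.contains("GL_EXT_texture_swizzle");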
See also android bitmap pixel format for glTexImage2D.

LibGDX: Use same TextureAtlas twice or use two different TextureAtlases

I'm migrating my native Android game to libGDX. That's why I use flipped graphics. Apparently NinePatches can't be flipped. (They are invisible or look strange.)
What would be more efficient:
use one big TextureAtlas containing all graphic files and load it twice (flipped and unflipped) or
use one big TextureAtlas for the flipped graphic files and a second small one for the NinePatch graphics?
Type A:
public static TextureAtlas atlas, atlas2;
public static void load() {
    // big atlas (1024 x 1024)
    atlas = new TextureAtlas(Gdx.files.internal("game.atlas"), true);
    // find many AtlasRegions here
    // Same TextureAtlas. Loaded into memory twice?
    atlas2 = new TextureAtlas(Gdx.files.internal("game.atlas"), false);
    button = atlas2.createPatch("button");
    dialog = atlas2.createPatch("dialog");
}
Type B:
public static TextureAtlas atlas, ninepatch;
public static void load() {
    // big atlas (1024 x 1024)
    atlas = new TextureAtlas(Gdx.files.internal("game.atlas"), true);
    // find many AtlasRegions here
    // small atlas (128 x 64)
    ninepatch = new TextureAtlas(Gdx.files.internal("ninepatch.atlas"), false);
    button = ninepatch.createPatch("button");
    dialog = ninepatch.createPatch("dialog");
}
I don't have time to test this right now, so I'll just outline the idea, but I think it can work. It is based on a plain Texture rather than a TextureAtlas, for simplicity.
short metadata = 2;
Texture yourTextureMetadata = new Texture(Gdx.files.internal(metaDataTexture));
int width = yourTextureMetadata.getWidth() - metadata;
int height = yourTextureMetadata.getHeight() - metadata;
TextureRegion yourTextureClean = new TextureRegion(yourTextureMetadata,
1, 1, width, height);
I assume the metadata border has a size of two; I don't remember exactly right now, sorry.
The idea is to take the larger texture, with metadata, and then cut it so that you have a clean region on the other side that you can flip. I hope it works.
For a TextureAtlas it would be similar: findRegions, cut off the metadata, and save the regions without it (see the sketch below).
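Untested sketch of that idea for an atlas, assuming a 2 px metadata border around each packed region (the pad value and the map the results go into are assumptions):
int pad = 2; // assumed metadata border in pixels
java.util.Map<String, TextureRegion> cleanRegions = new java.util.HashMap<>();
for (TextureAtlas.AtlasRegion r : atlas.getRegions()) {
    // cut the border off and flip the clean region vertically
    TextureRegion clean = new TextureRegion(r, pad / 2, pad / 2,
            r.getRegionWidth() - pad, r.getRegionHeight() - pad);
    clean.flip(false, true);
    cleanRegions.put(r.name, clean);
}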
On the other hand, note that you are keeping the textures in static fields. I think that when Android switches context from your application to another application and then back to yours, you can get visualization errors: your images may be displayed as black.

Android Camera Preview YUV format into RGB on the GPU

I have copy-pasted some code I found on Stack Overflow to convert the default camera preview YUV into RGB format and then upload it to OpenGL for processing.
That worked fine; the issue is that most of the CPU was busy converting the YUV images into RGB format, and it became the bottleneck.
I want to upload the YUV image into the GPU and then convert it into RGB in a fragment shader.
I took the same Java YUV to RGB function I found which worked on the CPU and tried to make it work on the GPU.
It turned out to be quite a nightmare, since there are several differences between doing calculations in Java and on the GPU.
First, the preview image arrives as a byte[] in Java, but bytes are signed, so there might be negative values.
In addition, the fragment shader normally deals with floating-point values in [0..1] instead of bytes.
I am sure this is solvable and I almost solved it. But I spent a few hours trying to figure out what I was doing wrong and couldn't make it work.
Bottom line, I ask for someone to just write this shader function and preferably test it. For me it would be a tedious monkey job since I don't really understand why this conversion works the way it is, and I just try to mimic the same function on the GPU.
This is a very similar function to what I used on Java:
Displaying YUV Image in Android
I did some of the work on the CPU, such as turning the 1.5*w*h bytes of YUV data into a w*h array of packed YUV values, as follows:
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (int) yuv420sp[yp] + 127;
            if ((i & 1) == 0) {
                v = (int) yuv420sp[uvp++] + 127;
                u = (int) yuv420sp[uvp++] + 127;
            }
            rgba[yp] = 0xFF000000 + (y << 16) | (u << 8) | v;
        }
    }
}
I added 127 because byte is signed.
I then loaded the rgba array into an OpenGL texture and tried to do the rest of the calculation on the GPU.
Any help would be appreciated...
I used this code from Wikipedia to calculate the conversion from YUV to RGB on the GPU:
private static int convertYUVtoRGB(int y, int u, int v) {
    int r, g, b;
    r = y + (int) (1.402f * v);
    g = y - (int) (0.344f * u + 0.714f * v);
    b = y + (int) (1.772f * u);
    r = r > 255 ? 255 : r < 0 ? 0 : r;
    g = g > 255 ? 255 : g < 0 ? 0 : g;
    b = b > 255 ? 255 : b < 0 ? 0 : b;
    return 0xff000000 | (b << 16) | (g << 8) | r;
}
I converted the floats to 0.0..255.0 and then used the above code.
The part on the CPU was to rearrange the original YUV pixels into a YUV matrix (also shown in Wikipedia).
Basically I used the Wikipedia code and did the simplest float<->byte conversions to make it work out.
Small mistakes like adding 16 to Y, or not adding 128 to U and V, would give undesirable results. So you need to take care of it.
But it wasn't a lot of work once I used the Wikipedia code as the base.
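For reference, the fragment-shader side of that conversion might look roughly like the sketch below. It assumes the packed texture produced by decodeYUV420SP() is uploaded as GL_RGBA and ends up with Y in the red channel, U in green and V in blue; the uniform and varying names are made up here, and the actual channel order depends on how the int[] is turned into bytes before glTexImage2D, so verify it on your device:
private static final String YUV_TO_RGB_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D uTexYUV;\n" +
        "varying vec2 vUV;\n" +
        "void main() {\n" +
        "    vec3 yuv = texture2D(uTexYUV, vUV).rgb;\n" +
        "    float y = yuv.r;\n" +
        "    // the +127 applied on the CPU side corresponds to roughly 0.5 here\n" +
        "    float u = yuv.g - 0.5;\n" +
        "    float v = yuv.b - 0.5;\n" +
        "    vec3 rgb = vec3(y + 1.402 * v,\n" +
        "                    y - 0.344 * u - 0.714 * v,\n" +
        "                    y + 1.772 * u);\n" +
        "    gl_FragColor = vec4(clamp(rgb, 0.0, 1.0), 1.0);\n" +
        "}\n";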
Converting on the CPU sounds easy, but I believe the question is how to do it on the GPU.
I did it recently in a project where I needed very fast QR code detection, even when the camera angle is 45 degrees to the surface where the code is printed, and it worked with great performance:
(the following code is trimmed to just the key lines; it assumes a solid understanding of both Java and OpenGL ES)
Create a GL texture that will hold the camera image:
int[] txt = new int[1];
GLES20.glGenTextures(1, txt, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, txt[0]);
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
Note that the texture type is not GL_TEXTURE_2D. This is important, since only the GL_TEXTURE_EXTERNAL_OES type is supported by the SurfaceTexture object, which will be used in the next step.
Setup SurfaceTexture:
SurfaceTexture surfTex = new SurfaceTexture(txt[0]);
surfTex.setOnFrameAvailableListener(this);
The above assumes that 'this' is an object that implements the 'onFrameAvailable' function (SurfaceTexture.OnFrameAvailableListener).
public void onFrameAvailable(SurfaceTexture st)
{
surfTexNeedUpdate = true;
// this flag will be read in GL render pipeline
}
Setup camera:
Camera cam = Camera.open();
cam.setPreviewTexture(surfTex);
This Camera API is deprecated as of Android 5.0, so if you target that, you have to use the new camera2 (CameraDevice) API.
In your render pipeline, use the following block to check whether the camera has a frame available and, if so, update the surface texture with it. When the surface texture is updated, it fills in the GL texture that is linked to it.
if( surfTexNeedUpdate )
{
surfTex.updateTexImage();
surfTexNeedUpdate = false;
}
To bind the GL texture that is linked to the camera via the SurfaceTexture, just do this in the rendering pipe:
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, txt[0]);
It goes without saying that you need to set the current active texture unit.
In the GL shader program which will use the above texture in its fragment part, the first line must be:
#extension GL_OES_EGL_image_external : require
Above is a must-have.
The texture uniform must be of type samplerExternalOES:
uniform samplerExternalOES u_Texture0;
Reading a pixel from it is just like from a GL_TEXTURE_2D type, and the UV coordinates are in the same range (from 0.0 to 1.0):
vec4 px = texture2D(u_Texture0, v_UV);
Once you have your render pipeline ready to render a quad with above texture and shader, just start the camera:
cam.startPreview();
You should see a quad on your GL screen with the live camera feed. Now you just need to grab the image with glReadPixels:
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bytes);
The above line assumes that your FBO is RGBA, that bytes is a direct ByteBuffer already allocated to the proper size (width * height * 4), and that width and height are the size of your FBO.
And voilà! You have captured RGBA pixels from the camera instead of converting the YUV bytes received in the onPreviewFrame callback...
You can also use an RGB framebuffer object and avoid alpha if you don't need it.
It is important to note that the camera will call onFrameAvailable on its own thread, which is not your GL render pipeline thread, so you should not perform any GL calls in that function.
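A safe pattern, sketched under the assumption that the renderer is a GLSurfaceView in RENDERMODE_WHEN_DIRTY (the glSurfaceView field is an assumption), is to only set the flag and request a render; requestRender() is documented as callable from any thread:
@Override
public void onFrameAvailable(SurfaceTexture st) {
    surfTexNeedUpdate = true;       // no GL calls here; this runs on the camera's thread
    glSurfaceView.requestRender();  // wakes the GL thread, which then calls updateTexImage()
}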
In February 2011, RenderScript was first introduced. Since Android 3.0 Honeycomb (API 11), and definitely since Android 4.2 Jelly Bean (API 17), when ScriptIntrinsicYuvToRGB was added, the easiest and most efficient solution has been to use RenderScript for the YUV to RGB conversion. I have recently generalized this solution to handle device rotation.
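A minimal sketch of that RenderScript path (imports from android.renderscript and android.graphics assumed; nv21Bytes, width, height and context stand for the camera frame data and app context you already have):
RenderScript rs = RenderScript.create(context);
ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
// input: the raw NV21 bytes from onPreviewFrame; output: an RGBA allocation of the frame size
Allocation in = Allocation.createSized(rs, Element.U8(rs), nv21Bytes.length);
Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height).create();
Allocation out = Allocation.createTyped(rs, rgbaType, Allocation.USAGE_SCRIPT);
in.copyFrom(nv21Bytes);
yuvToRgb.setInput(in);
yuvToRgb.forEach(out);
Bitmap rgba = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
out.copyTo(rgba); // RGBA pixels, ready for upload with GLUtils.texImage2D or further processing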
