Draw many pixel points android canvas or opengl? - java

I'm trying to implement my own mirroring protocol.
I'm using websockets to retrieve a compressed pixel buffer. After I decompress this buffer, I get a big array of positions with colors that I should display.
I'm using Canvas with a for loop over a huge ArrayList (2 million elements) in onDraw, like the following:
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    for (int i = 0; i < this.pixelsList.size(); i++) { // SIZE = 2,073,600 pixels
        canvas.drawPoint(this.pixelsList.get(i).getPosition().x, this.pixelsList.get(i).getPosition().y, this.pixelsList.get(i).getPaint());
    }
}
I'm looking for an efficient way to display this huge packet.
I looked at OpenGL ES, but I did not find any way to display one pixel at one position. Should I maybe take a look at OpenCV?

Don't draw each pixel individually by dereferencing every pixel address; upload the whole buffer as a texture in OpenGL instead. This is probably the fastest way.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture[texture_id].texture_width, texture[texture_id].texture_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelsList);
or, if it is updated frequently:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixelsList);
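For reference, here is a minimal sketch of what that could look like from Java on Android with GLES20. The field names and the assumption that the decompressed data arrives as one row-major RGBA frame are mine, and the texture is assumed to have been allocated once with glTexImage2D; the key point is that the pixel data has to end up in one contiguous direct buffer rather than in a list of two million objects:

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: pack the decompressed frame into one direct RGBA buffer and
// upload it into an existing texture instead of drawing 2 million points.
public class FrameUploader {
    private final int frameWidth, frameHeight, textureId; // illustrative names
    private final ByteBuffer frame;

    public FrameUploader(int width, int height, int textureId) {
        this.frameWidth = width;
        this.frameHeight = height;
        this.textureId = textureId;
        this.frame = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
    }

    // Call on the GL thread (e.g. from the GLSurfaceView renderer) once per received frame.
    public void upload(byte[] decompressedRgba) {
        frame.clear();
        frame.put(decompressedRgba); // row-major RGBA, frameWidth * frameHeight * 4 bytes
        frame.rewind();
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0,
                frameWidth, frameHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, frame);
        // The renderer then draws a single full-screen textured quad.
    }
}

Drawing one textured quad per frame is a single draw call, versus two million drawPoint calls on the Canvas.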

Related

OpenGL Array Texture Not Using Correct Textures in Shader

Edit: I've tried both glVertexAttribIPointer() and glVertexAttribPointer() with the types GL_INT and GL_UNSIGNED_INT. Changing the value to a constant in the shader works (fragColor = texture(samplerArr, vec3(uv, 2));), so passing the int to the shader seems to be the issue.
That title's a mouthful, but I think it explains the issue I'm experiencing pretty well. I've got this code that creates an array texture and is (hopefully) correct, as I can see some images. If I ask for the 0th image, I get the correct image, but if I use a texture id of any value larger than 0, the last texture in the texture array is the image I receive, not the expected one. This is the code for creating the texture (the images can be seen, so I know the image-to-byte conversion works):
texture = glGenTextures();
glBindTexture(GL_TEXTURE_2D_ARRAY, texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // 1-byte row alignment
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST); // Pixel perfect (with mipmapping)
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA, w, h, images.length, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
try {
    for (int i = 0; i < images.length; i++) {
        InputStream is = Texture.class.getResourceAsStream(images[i].getFullPath());
        if (is == null) {
            throw new FileNotFoundException("Could not locate texture file: " + images[i].getFullPath());
        }
        PNGDecoder img = new PNGDecoder(is);
        ByteBuffer buff = ByteBuffer.allocateDirect(4 * img.getWidth() * img.getHeight());
        img.decode(buff, img.getWidth() * 4, Format.RGBA);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // 4-byte row alignment (RGBA rows)
        buff.flip();
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, w, h, 1, GL_RGBA, GL_UNSIGNED_BYTE, buff);
        Debug.log("{} => {}", i, images[i].getFullPath());
    }
    glGenerateMipmap(GL_TEXTURE_2D_ARRAY);
} catch (Exception e) {
    Debug.error("Unable to create texture from path: {}", path);
    Debug.error(e, true);
}
I think most of that code is self-explanatory, but I'd just like to make sure I'm doing it right. I then pass the texture id to the shader through this vertex attribute:
bindVertexArray();
glBindBuffer(GL_ARRAY_BUFFER, tbo);
glBufferData(GL_ARRAY_BUFFER, tBuff, GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 1, GL_INT, false, 0, 0);
glDisableVertexAttribArray(2);
unbindVertexBuffer();
memFree(tBuff);
unbindVertexArray();
I think that is also correct, as I can pass the texture ID through and receive an output. That leaves one piece that could be wrong (but I still think is right for the same reasons), and that's the fragment shader:
fragColor = texture(samplerArr, vec3(uv, textureId));
Texture id is defined as
flat in int textureId;
and the sampler is
uniform sampler2DArray samplerArr;
I even tried passing through a uniform with the total number of textures in the array and using
fragColor = texture(samplerArr, vec3(uv, float(textureId) / float(totalTextures)));
instead, but that didn't change any textures.
I don't see anything wrong with any of those pieces of code (except for the last one), but I am new to array textures (and OpenGL in general), so I was hoping someone out there has had or has solved this issue and can guide me to the solution.

Process ALPHA_8 bitmap in Android and show it in ImageView

I'm implementing the Sobel edge detection algorithm. After processing, I apply new pixel values, either 255 or 0. My problem is that the resulting bitmap is not shown in the ImageView. I'm using the ALPHA_8 configuration because it takes less memory and is the most suitable for edge results.
How can I preview the results?
The ALPHA_8 format was made to be used as a mask, because it only contains alpha and no color information. You should convert it to another format, or if you want to use it anyway, you should apply it as a mask over a background, for example. Check out this response from an Android project member to a similar issue. You can also check out my response to another question, where I included a code example showing how to apply a mask to another bitmap.
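As a rough sketch of the conversion route (assuming the edge result sits in an ALPHA_8 bitmap and you only want to preview it): drawing the ALPHA_8 bitmap with a colored Paint onto an ARGB_8888 bitmap uses it as a mask for the paint color, which gives you something an ImageView can display directly:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;

// Sketch: render the ALPHA_8 edge mask in white on a black background.
Bitmap toDisplayable(Bitmap edges) {
    Bitmap out = Bitmap.createBitmap(edges.getWidth(), edges.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(out);
    canvas.drawColor(Color.BLACK);         // background
    Paint paint = new Paint();
    paint.setColor(Color.WHITE);           // color used where the mask has alpha
    canvas.drawBitmap(edges, 0, 0, paint); // ALPHA_8 bitmaps are drawn as a mask for the paint color
    return out;
}
// Usage: imageView.setImageBitmap(toDisplayable(edges));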
You can use ColorMatrixColorFilter to display ALPHA_8 bitmap.
For example, use the following matrix
mColorFilter = new ColorMatrixColorFilter(new float[]{
        0, 0, 0, 1, 0,
        0, 0, 0, 0, 0,
        0, 0, 0, 0, 0,
        0, 0, 0, 0, 255
});
to draw your bitmap in red: the first matrix row copies the source alpha into the red channel, and the offset in the last row sets the output alpha to fully opaque.
public void onDraw(Canvas canvas) {
    mPaint.setColorFilter(mColorFilter);
    canvas.drawBitmap(mBitmap, x, y, mPaint);
}
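If the bitmap is shown in a plain ImageView rather than a custom View, the same filter should also work when applied to the view itself (untested sketch, reusing the mBitmap and mColorFilter from above):

imageView.setImageBitmap(mBitmap);      // the ALPHA_8 bitmap
imageView.setColorFilter(mColorFilter); // same ColorMatrixColorFilter as above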

OpenGL - Why does my fbo/texture remain black?

I've been trying my very best to implement render-to-texture functionality using OpenGL framebuffer objects together with the LWJGL library from Java. However, the result I always get is a 100% black texture.
I'm simply asking for some advice about what the problem is. I'm not rendering any specific shapes. I bind my generated framebuffer, call glClearColor(1, 0, 0, 1); and then glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);, and then unbind the framebuffer. But when I try to render the texture bound to the framebuffer, the texture only shows black, where it actually should be red, right?
Also, glCheckFramebufferStatus() returns GL_FRAMEBUFFER_COMPLETE, so I suppose the error lies within the rendering part rather than the initialization phase. But I'll show the initialization code anyway.
The initialization code:
public RenderableTexture initialize(int width, int height, int internalFormat, int[] attachments, boolean useDepthBuffer) {
    if (!GLContext.getCapabilities().GL_EXT_framebuffer_object) {
        System.err.println("FrameBuffers not supported on your graphics card!");
    }
    this.width = width;
    this.height = height;
    hasDepthBuffer = useDepthBuffer;
    fbo = glGenFramebuffers();
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    id = glGenTextures();
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, (ByteBuffer) null);
    if (useDepthBuffer) {
        rbo = glGenRenderbuffers();
        glBindRenderbuffer(GL_RENDERBUFFER, rbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo);
    }
    glFramebufferTexture2D(GL_FRAMEBUFFER, attachments[0], GL_TEXTURE_2D, id, 0);
    int[] drawBuffers = new int[attachments.length];
    for (int i = 0; i < attachments.length; i++)
        if (attachments[i] == GL_DEPTH_ATTACHMENT)
            drawBuffers[i] = GL_NONE;
        else
            drawBuffers[i] = attachments[i];
    glDrawBuffers(Util.toIntBuffer(drawBuffers));
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        System.err.println("Warning! Incomplete Framebuffer");
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return this;
}
internalFormat has the value GL_RGBA8, and width and height are both 512. attachments[] contains only one value, GL_COLOR_ATTACHMENT0. useDepthBuffer is set to true.
The code above is only called once.
This is the rendering code:
public RenderManager draw() {
    glClearColor(bg.x, bg.y, bg.z, bg.w);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    texture.bindAsRenderTarget(true);
    texture.releaseRenderTarget();
    quad.draw();
    return this;
}
I set the clear color to black (0, 0, 0, 1) and then clear the screen. I then call texture.bindAsRenderTarget(true);. The texture object is the one that contains the initialize method from above, so some variables are shared between that method and bindAsRenderTarget().
This method looks like this:
public RenderableTexture bindAsRenderTarget(boolean clear) {
    glViewport(0, 0, width, height);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    glClearColor(1, 0, 0, 1f);
    if (clear)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    return this;
}
As you can see, I adjust the viewport to the size of the texture / framebuffer. I then bind the framebuffer and set the clear color to red. Then, since I passed true in the rendering code, it (as I believe) clears the currently bound framebuffer to red.
texture.releaseRenderTarget(); adjusts the viewport to fit the display and then calls glBindFramebuffer(GL_FRAMEBUFFER, 0);
The final line of code quad.draw(); simply binds the textureID of the texture bound to the framebuffer and then draws a simple quad with it.
That's pretty much all there is.
I can assure you that I'm rendering the quad correctly, since I can bind textures loaded from PNG files to it and the texture is successfully shown.
So to make things clear, the main question is pretty much:
Why on earth is the texture black after the clear, when it should be red? Where and what am I doing wrong?
EDIT: I have a feeling it might have to do with the binding of the different GL objects. Does the renderbuffer have to be bound at the point of rendering to its framebuffer? Does it not? Does it matter? How about the texture? At what points should they be bound?
I did something very stupid. The class that I initialized the FBO texture within (RenderableTexture.class) was a subclass of Texture.class. The binding method, including the textureID, was supposed to be inherited from the Texture class, as I had declared the id variable as protected. However, I had accidentally created a local private variable within the subclass, so when generating the texture, the textureID was saved into the subclass's id field, while binding used the uninitialized id from the superclass. Sorry for anyone trying to solve this without being able to do so :s
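In code, the bug boiled down to something like this (an illustrative sketch, not the actual classes):

import static org.lwjgl.opengl.GL11.*;

class Texture {
    protected int id;                     // meant to be shared with subclasses

    void bind() {
        glBindTexture(GL_TEXTURE_2D, id); // reads Texture.id
    }
}

class RenderableTexture extends Texture {
    private int id;                       // BUG: shadows Texture.id

    void create() {
        id = glGenTextures();             // writes only RenderableTexture.id
        // ... texture allocation and framebuffer setup ...
    }
    // bind() inherited from Texture still sees the uninitialized Texture.id (0),
    // so the default texture gets bound and the quad renders black.
}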

(Java LibGDX) How do I resize my textures in LibGDX?

I have been fooling around with LibGDX for a while now, and I want to easily port my programs to different systems. I have a background texture which I want to scale to the currently used resolution. The image is 1920x1080; how do I change it to the currently used resolution at runtime?
One option is to resize the texture when loading it, by scaling the Pixmap first:
Pixmap pixmap200 = new Pixmap(Gdx.files.internal("200x200.png"));
Pixmap pixmap100 = new Pixmap(100, 100, pixmap200.getFormat());
pixmap100.drawPixmap(pixmap200,
        0, 0, pixmap200.getWidth(), pixmap200.getHeight(),
        0, 0, pixmap100.getWidth(), pixmap100.getHeight()
);
Texture texture = new Texture(pixmap100);
pixmap200.dispose();
pixmap100.dispose();
From:
https://www.snip2code.com/Snippet/774713/LibGDX-Resize-texture-on-load
If you want to scale at drawing time instead, draw the texture stretched to the current screen size:
batch.begin();
batch.draw(yourtexture, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.end();
Using the camera viewport also works:
batch.begin();
batch.draw(yourtexture, 0, 0, cam.viewportWidth, cam.viewportHeight);
batch.end();
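Another option (a sketch, assuming you accept letterboxing and that batch and background already exist) is to work in a fixed 1920x1080 virtual resolution and let a FitViewport scale it to whatever screen the program runs on:

import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;
import com.badlogic.gdx.utils.viewport.Viewport;

// In create():
OrthographicCamera cam = new OrthographicCamera();
Viewport viewport = new FitViewport(1920, 1080, cam); // virtual resolution

// In resize(int width, int height):
viewport.update(width, height, true);                 // re-fit and center the camera

// In render():
batch.setProjectionMatrix(cam.combined);
batch.begin();
batch.draw(background, 0, 0, 1920, 1080);             // drawn in virtual units
batch.end();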

Java OpenGL screen sized texture mapped quad

I have a Java OpenGL (JOGL) app, and I'm trying to create a texture-mapped quad that covers the entire screen. I draw some pixels to a buffer, and then I want to read those pixels into a texture and redraw them on screen (with a fragment shader applied). My code for mapping the texture to the viewport is:
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glOrtho( 0, width, height, 0, -1, 1 );
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glPushMatrix();
gl.glLoadIdentity();
IntBuffer ib = IntBuffer.allocate(1);
gl.glEnable(GL.GL_TEXTURE_2D);
gl.glGenTextures(1, ib);
gl.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1);
//buff contains pixels read from glReadPixels
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, width, height, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, buff);
gl.glBindTexture(GL.GL_TEXTURE_2D, ib.get(0));
gl.glBegin(GL.GL_QUADS);
gl.glTexCoord2f(0,1);
gl.glVertex2f(0,0);
gl.glTexCoord2f(0,0);
gl.glVertex2f(0,height);
gl.glTexCoord2f(1,0);
gl.glVertex2f(width,height);
gl.glTexCoord2f(1,1);
gl.glVertex2f(width,0);
gl.glEnd();
gl.glBindTexture(GL.GL_TEXTURE_2D, 0);
gl.glPopMatrix();
gl.glPopMatrix();
The end result is a quad that does not cover the whole viewport (it's only partially on screen) and does not contain the pixels from the buffer. What am I doing incorrectly here?
Thanks,
Jeff
First, you should only create the texture in your initialization code. You should not be calling glTexImage2D every frame. Only call glTexImage2D again if the size of the texture changes; glTexSubImage2D can be used to upload data to the texture. Think of glTexImage2D as "new" and glTexSubImage2D as a memory copy.
Do this once, after initializing OpenGL.
IntBuffer ib = IntBuffer.allocate(1); //Store this in your object
gl.glGenTextures(1, ib);
gl.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1);
//buff contains pixels read from glReadPixels
gl.glBindTexture(GL.GL_TEXTURE_2D, ib.get(0));
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, width, height, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, 0);
gl.glBindTexture(GL.GL_TEXTURE_2D, 0);
Then, each frame, do this:
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glBindTexture(GL.GL_TEXTURE_2D, ib.get(0)); //Retrieved from your object
gl.glEnable(GL.GL_TEXTURE_2D);
gl.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, width, height, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, buff);
gl.glBegin(GL.GL_QUADS);
gl.glTexCoord2f(0,1);
gl.glVertex2f(-1, -1);
gl.glTexCoord2f(0, 0);
gl.glVertex2f(-1, 1);
gl.glTexCoord2f(1, 0);
gl.glVertex2f(1, 1);
gl.glTexCoord2f(1, 1);
gl.glVertex2f(1, -1);
gl.glEnd();
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glPopMatrix();
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glPopMatrix();
gl.glMatrixMode(GL.GL_MODELVIEW);
By using identity for projection and modelview, we are able to supply vertex coordinates directly in clip-space. The [-1, 1] range in clip-space maps to [0, width] and [0, height] in window space. So we don't have to know or care about how big the window is; as long as glViewport was set up correctly, this should work.
It may not be the problem, but it won't be helping: You are popping the modelview matrix twice for a single push. You are not popping the projection matrix at all.
I would recommend setting the projection matrix once at startup, without doing any pushes or pops. You don't really need to push and pop the modelview matrix either. (You could do your texture setup once at startup, too.)
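For example, with a JOGL 2 GLEventListener the projection can be set once in reshape() and never touched again (a sketch; the display() body is reduced to a comment, and the package is javax.media.opengl in older JOGL builds):

import com.jogamp.opengl.GL2;
import com.jogamp.opengl.GLAutoDrawable;
import com.jogamp.opengl.GLEventListener;

public class FullscreenQuadRenderer implements GLEventListener {
    @Override
    public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) {
        GL2 gl = drawable.getGL().getGL2();
        gl.glViewport(0, 0, w, h);
        gl.glMatrixMode(GL2.GL_PROJECTION);
        gl.glLoadIdentity();              // identity projection: vertices go straight to clip-space
        gl.glMatrixMode(GL2.GL_MODELVIEW);
        gl.glLoadIdentity();
    }

    @Override public void init(GLAutoDrawable drawable) { /* create the texture once here */ }
    @Override public void display(GLAutoDrawable drawable) { /* bind texture, glTexSubImage2D, draw the quad in [-1, 1] */ }
    @Override public void dispose(GLAutoDrawable drawable) { }
}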
I would start by checking glGetError with code like the one below. Note that I used the GL2 object because there were some issues with older versions of JOGL and the GL object, silly things like GL_QUADS not being there.
If you have a shader enabled with the above code, you need to do the texturing by reading the sampler. If so, please attach the shader code you are using with this rendering code.
private static void checkForGLErrors(GL2 gl) {
    int errno = gl.glGetError();
    switch (errno) {
        case GL2.GL_INVALID_ENUM:
            System.err.println("OpenGL Error: Invalid ENUM");
            break;
        case GL2.GL_INVALID_VALUE:
            System.err.println("OpenGL Error: Invalid Value");
            break;
        case GL2.GL_INVALID_OPERATION:
            System.err.println("OpenGL Error: Invalid Operation");
            break;
        case GL2.GL_STACK_OVERFLOW:
            System.err.println("OpenGL Error: Stack Overflow");
            break;
        case GL2.GL_STACK_UNDERFLOW:
            System.err.println("OpenGL Error: Stack Underflow");
            break;
        case GL2.GL_OUT_OF_MEMORY:
            System.err.println("OpenGL Error: Out of Memory");
            break;
        default:
            return;
    }
}
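Usage is just a matter of dropping a call in after any suspect GL call, for example (assuming gl is the GL2 instance the helper expects):

gl.glBindTexture(GL.GL_TEXTURE_2D, ib.get(0));
checkForGLErrors(gl); // prints a message if any GL call since the last check raised an error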
I would also try to avoid generating the texture every frame if it is something that doesn't change. You can save the textureId and bind it later.
