Reading Buffers in Java

I need a bit of a hand reading the buffer that is spat out by glReadPixels in Android's OpenGL ES API. Here is my code so far...
public static void pick(GL11 gl){
    int[] viewport = new int[4];
    IntBuffer pixel = IntBuffer.allocate(384000);
    mColourR = BaseObject.getColourR();
    mColourG = BaseObject.getColourG();
    mColourB = BaseObject.getColourB();
    x = MGLSurfaceView.X();
    y = MGLSurfaceView.Y();
    gl.glGetIntegerv(GL11.GL_VIEWPORT, viewport, 0);
    gl.glReadPixels((int)x, viewport[3] - (int)y, 1, 1, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, pixel);
}
The output buffer in this code is named "pixel". What do I need to add to this code to get the colour values back from the "pixel" buffer?

You can use one of the get() methods of the IntBuffer to access individual values.
RGB color values are usually stored in that very order, so calling pixel.get(0) will get you the red value of the first pixel, pixel.get(1) will get you the green channel and so on. Usually, the values are stored line-wise.
So, if you need a value for a particular pixel (x, y), you will have to call get(3*(screenWidth*y + x)) for its red component, and the next two indices for green and blue.
By the way, you can retrieve the backing int array from your IntBuffer by calling pixel.array().
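If you only need the colour under the pick point, an alternative that avoids any ambiguity about how the component bytes are packed into ints is to read a single RGBA pixel into a ByteBuffer and mask each byte to an unsigned value. This is just a sketch of that alternative (using java.nio.ByteBuffer/ByteOrder), not part of the original answer; it assumes the same gl, x, y and viewport values as the question's code:
// Read one RGBA pixel at the pick position into a 4-byte buffer.
ByteBuffer pixelBytes = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
gl.glReadPixels((int) x, viewport[3] - (int) y, 1, 1,
        GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixelBytes);
// Bytes are signed in Java, so mask to get 0..255 component values.
int r = pixelBytes.get(0) & 0xff;
int g = pixelBytes.get(1) & 0xff;
int b = pixelBytes.get(2) & 0xff;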

Related

How to efficiently display raw rgb int array in javafx?

I have an int array where each value stores a bit-packed RGB value (8 bits per channel), alpha is always 255 (opaque), and I want to display that in JavaFX.
My current approach is using a canvas like this:
GraphicsContext graphics = canvas.getGraphicsContext2D();
PixelWriter pw = graphics.getPixelWriter();
pw.setPixels(0, 0, width, height, PixelFormat.getIntArgbInstance(), pixels, 0, width);
However, before that I actually have to set the alpha component of each pixel by iterating over the array and OR'ing each value with a mask that turns it from RGB into ARGB, like this:
for (int i = 0; i < pixels.length; i++) {
    pixels[i] = 0xFF000000 | pixels[i];
}
Is there a more efficient way to do this (as the pixels array is updated many times every second)?
I was hoping there is an IntRgbInstance, but unfortunately there isn't (only ByteRgbInstance).
Other approaches I've tested:
Approach 1: Creating an IntBuffer that is filled up like this:
IntBuffer buffer = IntBuffer.allocate(pixels.length * 4);
for (int pixel : pixels) {
    buffer.put(0xFF000000 | pixel);
}
And then generating a PixelBuffer that uses this buffer; the PixelBuffer is then used as an input to this WritableImage constructor: https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/WritableImage.html#%3Cinit%3E(javafx.scene.image.PixelBuffer)
and then I display that WritableImage using an ImageView.
This however still didn't speed anything up (it rather made it a bit slower), and I'm guessing that's because I have to construct a new WritableImage instance each time the pixels int array is updated.
Approach 2 (which didn't work for some reason, i.e. it displayed nothing on the screen): Creating a buffer the same way as above and using it in one of the setPixels() methods that takes a buffer:
IntBuffer buffer = IntBuffer.allocate(pixels.length * 4);
for (int pixel : pixels) {
    buffer.put(0xFF000000 | pixel);
}
pw.setPixels(0, 0, width, height, PixelFormat.getIntArgbInstance(), buffer, width);
After a bit more research I found out that I don't need to create a new WritableImage instance each time the pixels array is updated; I can just use the updateBuffer method here: https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/PixelBuffer.html#updateBuffer(javafx.util.Callback)
So the code currently looks like this:
pb.updateBuffer(callback -> {
    buffer.clear();
    for (int pixel : pixels) {
        buffer.put(0xFF000000 | pixel);
    }
    return null;
});
Where pb and buffer are created only once, like this:
IntBuffer buffer = IntBuffer.allocate(pixels.length * 4);
PixelBuffer<IntBuffer> pb = new PixelBuffer<>(width, height, buffer, PixelFormat.getIntArgbPreInstance());
view.setImage(new WritableImage(pb));
and this did indeed result in a nice speedup (close to 2x compared to my initial approach)
Maybe this https://openjfx.io/javadoc/17/javafx.graphics/javafx/scene/image/WritableImage.html#%3Cinit%3E(javafx.scene.image.PixelBuffer) is what you are looking for. You could create a PixelBuffer from an IntBuffer of your data.
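Putting the suggestion together, a minimal sketch of that route (pixels, width and height are assumed from the question; the buffer only needs width*height ints for the INT_ARGB_PRE format, and updateBuffer must be called on the JavaFX Application Thread):
// Created once: a buffer of exactly width*height ints, a PixelBuffer over it,
// and a WritableImage shown in an ImageView.
IntBuffer buffer = IntBuffer.allocate(width * height);
PixelBuffer<IntBuffer> pixelBuffer =
        new PixelBuffer<>(width, height, buffer, PixelFormat.getIntArgbPreInstance());
ImageView view = new ImageView(new WritableImage(pixelBuffer));

// Run whenever the pixels array has been updated.
pixelBuffer.updateBuffer(pb -> {
    buffer.clear();
    for (int pixel : pixels) {
        buffer.put(0xFF000000 | pixel);   // force alpha to opaque
    }
    return null;                          // null = the whole buffer is dirty
});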

Multiplying Pixel Values in BufferedImage results in strange Behaviour

I am currently working on a program to help photographers with the creation of timelapses.
It calculates a decline or rise in brightness over a series of images, so that changes in exposure and ISO, for example, don't affect the overall brightness curve.
For this I use a simple Swing-based interface which displays the first and last image. Under them are sliders to adjust the brightness of each image.
This is applied via direct manipulation of the BufferedImage's underlying DataBuffer.
Mostly this works, but I encountered some images which seem to have a problem.
Do you have an idea why this is happening?
public BufferedImage getImage(float mult){
    BufferedImage retim;
    retim = new BufferedImage(img.getWidth(), img.getHeight(), img.getType());
    Graphics g = retim.getGraphics();
    g.drawImage(img, 0, 0, null);
    g.dispose();
    DataBufferByte db = (DataBufferByte) retim.getRaster().getDataBuffer();
    byte[] bts = db.getData();
    for(int i = 0; i < bts.length; i++){
        float n = bts[i] * mult;
        if(n > 255){
            bts[i] = (byte) 255;
        }else{
            bts[i] = (byte) n;
        }
    }
    return retim;
}
This is the method which takes a float and multiplies every byte in the image by it (plus some code to prevent the byte values from overflowing).
This is the unwanted behaviour (on the left) and the expected behaviour (on the right).
Your problem is this line, and it occurs due to the fact that Java bytes are signed (in the range [-128...127]):
float n = bts[i] * mult;
For component values above 127, bts[i] is read as a negative number, so after the multiplication your n variable may be negative, causing the wrap-around you see.
To fix it, use a bit mask to get the value as an unsigned integer (in the range [0...255]), before multiplying with the constant:
float n = (bts[i] & 0xff) * mult;
An even better fix is probably to use RescaleOp, which is built to do brightness adjustments on BufferedImages.
Something like:
public BufferedImage getImage(float mult) {
    return new RescaleOp(mult, 0, null).filter(img, null);
}
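For completeness, a minimal usage sketch of that RescaleOp route with placeholder file names (RescaleOp and BufferedImage live in java.awt.image, ImageIO in javax.imageio, File in java.io):
// Brighten an image by 20% and write it back out; "in.jpg" and "out.jpg" are placeholders.
BufferedImage source = ImageIO.read(new File("in.jpg"));
BufferedImage brighter = new RescaleOp(1.2f, 0, null).filter(source, null);
ImageIO.write(brighter, "jpg", new File("out.jpg"));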
This is due to the capping of the values of certain channels in the image.
For example (assuming a simple RGB colour space):
A pixel starts at (125, 255, 0); if you multiply by a factor of 2.0, red becomes 250 but green is capped at 255, giving (250, 255, 0). The red-to-green ratio has changed, so this is a different hue than the original.
This is also why the strange results only occur on pixels that already have high brightness to start with.
This link may help with a better algorithm for adjusting brightness.
You could also refer to this related question.

Convert color object to readable array

I want to make a negative of an image in Java, but I'm not sure how to convert a Color object into an array which can be manipulated. Here's a snippet of my code:
Color col;
col = picture.getPixel(x,y).getColor();
//x and y are from a for loop
picture.getPixel(x,y).setColor(~~~);
setColor takes three integers, one for each color channel (RGB). I want to convert Color col to an array which I can read. Something like the below:
picture.getPixel(x,y).setColor(255-col[0],255-col[1],255-col[2]);
255-col[n] of course creates a negative of the pixel, but Color col is not an array, and I'd like to access it as one. How can I cast a Color object to an array?
I could do something like the below and not use a Color object at all,
r = picture.getPixel(x,y).getRed(); //r is now an integer 0-255
//repeat the above for green and blue
picture.getPixel(x,y).setColor(r,g,b);
But I'd much rather do it in one line.
What about:
int [] arrayRGB = new int[3];
arrayRGB[0] = col.getRed();
arrayRGB[1] = col.getGreen();
arrayRGB[2] = col.getBlue();
Or directly:
picture.getPixel(x,y).setColor(255-col.getRed(),255-col.getGreen(),255-col.getBlue());
Take a look at the Color class.
You cannot cast Color to an array, but you can get its components as an array:
int[] rgb = new int[] { col.getRed(), col.getGreen(), col.getBlue() };
You might want to just use these directly.
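As a rough sketch combining the two answers with the question's picture.getPixel(...) API (that API comes from the question, not from the standard library), the negative of one pixel could then be written back like this:
// Invert one pixel using the component array from the answer above.
Color col = picture.getPixel(x, y).getColor();
int[] rgb = { col.getRed(), col.getGreen(), col.getBlue() };
picture.getPixel(x, y).setColor(255 - rgb[0], 255 - rgb[1], 255 - rgb[2]);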

Android Camera Preview YUV format into RGB on the GPU

I have copy-pasted some code I found on Stack Overflow to convert the default camera preview from YUV into RGB format and then upload it to OpenGL for processing.
That worked fine; the issue is that most of the CPU was busy converting the YUV images into RGB, and it turned into the bottleneck.
I want to upload the YUV image into the GPU and then convert it into RGB in a fragment shader.
I took the same Java YUV to RGB function I found which worked on the CPU and tried to make it work on the GPU.
It turned out to be quite a nightmare, since there are several differences between doing calculations in Java and on the GPU.
First, the preview image comes as a byte[] in Java, but bytes are signed, so there might be negative values.
In addition, the fragment shader normally deals with [0..1] floating-point values instead of bytes.
I am sure this is solvable, and I almost solved it, but I spent a few hours trying to figure out what I was doing wrong and couldn't make it work.
Bottom line, I'm asking for someone to just write this shader function and preferably test it. For me it would be tedious work, since I don't really understand why this conversion works the way it does, and I would just be trying to mimic the same function on the GPU.
This is a very similar function to what I used on Java:
Displaying YUV Image in Android
I did some of the work on the CPU, such as turning the 1.5*w*h-byte YUV format into a w*h array of packed YUV values, as follows:
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (int) yuv420sp[yp] + 127;
            if ((i & 1) == 0) {
                v = (int) yuv420sp[uvp++] + 127;
                u = (int) yuv420sp[uvp++] + 127;
            }
            rgba[yp] = 0xFF000000 + (y << 16) | (u << 8) | v;
        }
    }
}
I added 127 because Java bytes are signed.
I then loaded the rgba array into an OpenGL texture and tried to do the rest of the calculation on the GPU.
Any help would be appreciated...
I used this code from Wikipedia to calculate the conversion from YUV to RGB on the GPU:
private static int convertYUVtoRGB(int y, int u, int v) {
    int r, g, b;
    r = y + (int) (1.402f * v);
    g = y - (int) (0.344f * u + 0.714f * v);
    b = y + (int) (1.772f * u);
    r = r > 255 ? 255 : r < 0 ? 0 : r;
    g = g > 255 ? 255 : g < 0 ? 0 : g;
    b = b > 255 ? 255 : b < 0 ? 0 : b;
    return 0xff000000 | (b << 16) | (g << 8) | r;
}
I converted the floats to the 0.0..255.0 range and then used the above code.
The part on the CPU was to rearrange the original YUV pixels into a YUV matrix (also shown in Wikipedia).
Basically I used the Wikipedia code and did the simplest float<->byte conversions to make it work out.
Small mistakes, like adding 16 to Y or not offsetting U and V by 128, give undesirable results, so you need to take care of them.
But it wasn't a lot of work once I used the Wikipedia code as the base.
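To make the byte handling concrete, here is a hedged sketch of the per-pixel conversion with the unsigned masking and the 128 offset applied explicitly. It uses the same constants as the code above, but it is only an illustration of the signed-byte handling, not the exact code the answer ran:
// Convert one camera-preview pixel to packed ARGB; the bytes come straight from the preview byte[].
static int yuvPixelToArgb(byte yByte, byte uByte, byte vByte) {
    int y = yByte & 0xff;              // mask to unsigned 0..255
    int u = (uByte & 0xff) - 128;      // centre chroma around 0
    int v = (vByte & 0xff) - 128;

    int r = y + (int) (1.402f * v);
    int g = y - (int) (0.344f * u + 0.714f * v);
    int b = y + (int) (1.772f * u);

    r = Math.max(0, Math.min(255, r)); // clamp to byte range
    g = Math.max(0, Math.min(255, g));
    b = Math.max(0, Math.min(255, b));

    return 0xff000000 | (r << 16) | (g << 8) | b;
}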
Converting on the CPU sounds easy, but I believe the question is how to do it on the GPU.
I did it recently in my project where I needed very fast QR code detection, even when the camera angle is 45 degrees to the surface where the code is printed, and it worked with great performance:
(the following code is trimmed to just the key lines; it is assumed that you have a solid understanding of both Java and OpenGL ES)
Create a GL texture that will hold the camera image:
int[] txt = new int[1];
GLES20.glGenTextures(1, txt, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, txt[0]);
GLES20.glTexParameterf(... set min filter to GL_LINEAR );
GLES20.glTexParameterf(... set mag filter to GL_LINEAR );
GLES20.glTexParameteri(... set wrap_s to GL_CLAMP_TO_EDGE );
GLES20.glTexParameteri(... set wrap_t to GL_CLAMP_TO_EDGE );
Pay attention that the texture type is not GL_TEXTURE_2D. This is important, since only the GL_TEXTURE_EXTERNAL_OES type is supported by the SurfaceTexture object, which will be used in the next step.
Setup SurfaceTexture:
SurfaceTexture surfTex = new SurfaceTexture(txt[0]);
surfTex.setOnFrameAvailableListener(this);
The above assumes that 'this' is an object that implements the 'onFrameAvailable' function (SurfaceTexture.OnFrameAvailableListener).
public void onFrameAvailable(SurfaceTexture st)
{
    surfTexNeedUpdate = true;
    // this flag will be read in GL render pipeline
}
Setup camera:
Camera cam = Camera.open();
cam.setPreviewTexture(surfTex);
This Camera API is deprecated as of Android 5.0, so if you target that, you have to use the new CameraDevice (camera2) API.
In your render pipeline, have the following block to check if the camera has a frame available and update the surface texture with it. When the surface texture is updated, it will fill in the GL texture that is linked with it.
if( surfTexNeedUpdate )
{
    surfTex.updateTexImage();
    surfTexNeedUpdate = false;
}
To bind the GL texture which is linked to the Camera -> SurfaceTexture chain, just do this in the rendering pipe:
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, txt[0]);
It goes without saying that you need to set the current active texture unit.
In your GL shader program, which will use the above texture in its fragment part, the first line must be:
#extension GL_OES_EGL_image_external : require
The above is a must-have.
Texture uniform must be samplerExternalOES type:
uniform samplerExternalOES u_Texture0;
Reading a pixel from it is just like reading from a GL_TEXTURE_2D texture, and the UV coordinates are in the same range (from 0.0 to 1.0):
vec4 px = texture2D(u_Texture0, v_UV);
Once you have your render pipeline ready to render a quad with the above texture and shader, just start the camera:
cam.startPreview();
You should see a quad on your GL screen with the live camera feed. Now you just need to grab the image with glReadPixels:
GLES20.glReadPixels(0,0,width,height,GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bytes);
The above line assumes that your FBO is RGBA, that bytes is a ByteBuffer already allocated to the proper size (GLES20.glReadPixels takes a java.nio.Buffer rather than a plain byte[]), and that width and height are the size of your FBO.
And voilà! You have captured RGBA pixels from the camera instead of converting the YUV bytes received in the onPreviewFrame callback...
You can also use an RGB framebuffer object and avoid alpha if you don't need it.
It is important to note that the camera will call onFrameAvailable on its own thread, which is not your GL render pipeline thread, so you should not perform any GL calls in that function.
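As a rough sketch of that readback step (the direct ByteBuffer and the copy into a Bitmap are additions to the answer, and remember that GL rows come back bottom-up relative to Bitmap coordinates):
// Allocate a direct buffer matching the FBO size, read the pixels, then copy them into a Bitmap.
ByteBuffer pixelBuf = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);

pixelBuf.rewind();
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(pixelBuf);   // image will be vertically flipped compared to screen orientation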
Renderscript was first introduced in February 2011 with Android 3.0 Honeycomb (API 11), and since Android 4.2 Jelly Bean (API 17), when ScriptIntrinsicYuvToRGB was added, the easiest and most efficient solution has been to use Renderscript for YUV to RGB conversion. I have recently generalized this solution to handle device rotation.
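A rough sketch of the ScriptIntrinsicYuvToRGB route mentioned above (the variable names, the NV21-sized yuvBytes array, and the ARGB_8888 bitmap of the frame size are assumptions; the classes come from android.renderscript):
// Build the intrinsic once.
RenderScript rs = RenderScript.create(context);
ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

// Input: the raw NV21 preview bytes; output: an RGBA allocation the size of the frame.
Type yuvType = new Type.Builder(rs, Element.U8(rs)).setX(yuvBytes.length).create();
Allocation in = Allocation.createTyped(rs, yuvType, Allocation.USAGE_SCRIPT);
Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height).create();
Allocation out = Allocation.createTyped(rs, rgbaType, Allocation.USAGE_SCRIPT);

// Per frame: copy the preview bytes in, run the conversion, copy the result into a Bitmap.
in.copyFrom(yuvBytes);
yuvToRgb.setInput(in);
yuvToRgb.forEach(out);
out.copyTo(bitmap);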

Convert jpeg/png to an array of pixels in java

How can I convert a string containing a jpeg or png to an array (preferably one-dimensional) of pixels? Ideally using classes built into Java?
It turns out you need commons-fileupload. Look at the user guide for how to obtain the image InputStream. From there you can simply call:
BufferedImage image = ImageIO.read(item.getInputStream());
From here on there are many ways:
loop over the image dimensions and for each x and y call int rgb = image.getRGB(x, y);
same as above, but call getRed(x, y), getGreen(x, y), getBlue(x, y)
get the ColorModel and call the above methods there
call getRGB(startX, startY, w, h, rgbArray, offset, scansize) (see the sketch below)
call getData(), which returns a Raster, and call getPixels(...) there
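For instance, a minimal sketch of the bulk getRGB variant from the list above, which gives you the one-dimensional array the question asks for (the file name is a placeholder):
BufferedImage image = ImageIO.read(new File("photo.png"));   // placeholder file name
int w = image.getWidth();
int h = image.getHeight();

// One int per pixel, packed as 0xAARRGGBB, row by row.
int[] pixels = image.getRGB(0, 0, w, h, null, 0, w);

// Example: unpack the channels of the first pixel.
int argb = pixels[0];
int r = (argb >> 16) & 0xff;
int g = (argb >> 8) & 0xff;
int b = argb & 0xff;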
Use PixelGrabber. It returns a one-dimensional array of RGB data.
