How do I go about getting the pixel color of an RGBA texture? Say I have a function like this:
public Color getPixel(int x, int y) {
int r = ...
int g = ...
int b = ...
int a = ...
return new Color(r, g, b, a);
}
I'm having a hard time getting glGetTexImage() to work:
int[] p = new int[size.x * size.y * 4];
ByteBuffer buffer = ByteBuffer.allocateDirect(size.x * size.y * 16);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
buffer.asIntBuffer().get(p);
for (int i = 0; i < p.length; i++) {
p[i] = (int) (p[i] & 0xFF);
}
But I don't know how to access a pixel with a given coordinate.
Like this? Hope this helps you :)
public Color getPixel(BufferedImage image, int x, int y) {
    // Read every pixel as a packed ARGB int (alpha in the top byte)
    int[] pixels = image.getRGB(0, 0, image.getWidth(), image.getHeight(),
            null, 0, image.getWidth());
    if (y < image.getHeight() && x < image.getWidth()) {
        int pixel = pixels[y * image.getWidth() + x];
        int r = (pixel >> 16) & 0xFF; // Red
        int g = (pixel >> 8) & 0xFF;  // Green
        int b = pixel & 0xFF;         // Blue
        int a = (pixel >> 24) & 0xFF; // Alpha
        return new Color(r, g, b, a);
    } else {
        return new Color(0, 0, 0, 1);
    }
}
It's not tested, but it should work.
Here's what I did to accomplish this.
First, I set the pixels in a byte[] with glGetTexImage.
byte[] pixels = new byte[size.x * size.y * 4];
ByteBuffer buffer = ByteBuffer.allocateDirect(pixels.length);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
buffer.get(pixels);
Then, to get a pixel at a specific coordinate, here's the algorithm I used:
public Color getPixel(int x, int y) {
if (x >= size.x || y >= size.y) {
return null;
}
int index = (x + y * size.x) * 4;
int r = pixels[index] & 0xFF;
int g = pixels[index + 1] & 0xFF;
int b = pixels[index + 2] & 0xFF;
int a = pixels[index + 3] & 0xFF;
return new Color(r, g, b, a);
}
This returns a Color object with components ranging from 0 to 255, as expected.
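A minimal usage sketch (assuming pixels has been filled by glGetTexImage as shown above, and that Color is java.awt.Color or any class with equivalent getters; the coordinates are just an example):
Color c = getPixel(10, 20);
System.out.println(c.getRed() + " " + c.getGreen() + " " + c.getBlue() + " " + c.getAlpha());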
Related
I am trying to convert a ByteBuffer to a Bitmap image, but the output I get is noisy, i.e. not what I had expected. My code is as follows:
private Bitmap getOutputImage(ByteBuffer output){
output.rewind();
int outputWidth = 384;
int outputHeight = 384;
Bitmap bitmap = Bitmap.createBitmap(outputWidth, outputHeight, Bitmap.Config.RGB_565);
int [] pixels = new int[outputWidth * outputHeight];
for (int i = 0; i < outputWidth * outputHeight; i++) {
//val a = 0xFF;
//float a = (float) 0xFF;
//val r: Float = output?.float!! * 255.0f;
//byte val = output.get();
float r = ((float) output.get()) * 255.0f;
//val g: Float = output?.float!! * 255.0f;
float g = ((float) output.get()) * 255.0f;
//val b: Float = output?.float!! * 255.0f;
float b = ((float) output.get()) * 255.0f;
//pixels[i] = a shl 24 or (r.toInt() shl 16) or (g.toInt() shl 8) or b.toInt()
pixels[i] = (((int) r) << 16) | (((int) g) << 8) | ((int) b);
}
bitmap.setPixels(pixels, 0, outputWidth, 0, 0, outputWidth, outputHeight);
return bitmap;
}
The output image I am getting is just noise.
Please advise me on what is wrong here.
output.get() reads only 1 byte from the buffer, but each channel in the model output is a 4-byte float.
Maybe you have to change output.get() to output.getFloat();
then it will work.
This is my code:
ByteBuffer modelOutput = ByteBuffer.allocateDirect(200 * 200 * 3 * 4).order(ByteOrder.nativeOrder());
Interpreter tflite = getTfliteInterpreter("ESRGAN.tflite");
tflite.run(input, modelOutput);
modelOutput.rewind();
int outputWidth = 200;
int outputHeight = 200;
Bitmap bitmap2 = Bitmap.createBitmap(outputWidth, outputHeight, Bitmap.Config.ARGB_8888);
int [] pixels = new int[outputWidth * outputHeight];
for (int i = 0; i < outputWidth * outputHeight; i++) {
int a = 0xFF;
float r = (modelOutput.getFloat());
float g = (modelOutput.getFloat());
float b = (modelOutput.getFloat());
pixels[i] = a << 24 | ((int) r << 16) | ((int) g << 8) | (int) b;
}
bitmap2.setPixels(pixels, 0, outputWidth, 0, 0, outputWidth, outputHeight);
I have the following constructor for a RecoloredImage that takes an input image and replaces every pixel of the old color with a pixel of the new color. However, the image doesn't actually change. The code between the testing comments is purely for testing purposes, and the printed value is not at all the new color I want.
public RecoloredImaged(Image inputImage, Color oldColor, Color newColor) {
int width = (int) inputImage.getWidth();
int height = (int) inputImage.getHeight();
WritableImage outputImage = new WritableImage(width, height);
PixelReader reader = inputImage.getPixelReader();
PixelWriter writer = outputImage.getPixelWriter();
// -- testing --
PixelReader newReader = outputImage.getPixelReader();
// -- end testing --
int ob = (int) oldColor.getBlue() * 255;
int or = (int) oldColor.getRed() * 255;
int og = (int) oldColor.getGreen() * 255;
int nb = (int) newColor.getBlue() * 255;
int nr = (int) newColor.getRed() * 255;
int ng = (int) newColor.getGreen() * 255;
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int argb = reader.getArgb(x, y);
int a = (argb >> 24) & 0xFF;
int r = (argb >> 16) & 0xFF;
int g = (argb >> 8) & 0xFF;
int b = argb & 0xFF;
if (g == og && r == or && b == ob) {
r = nr;
g = ng;
b = nb;
}
argb = (a << 24) | (r << 16) | (g << 8) | b;
writer.setArgb(x, y, argb);
// -- testing --
String s = Integer.toHexString(newReader.getArgb(x, y));
if (!s.equals("0"))
System.out.println(s);
// -- end testing --
}
}
image = outputImage;
}
The cast operator has a higher precedence than the multiplication operator. Your calculations for the ob, ..., ng values are therefore compiled to the same bytecode as this code:
int ob = ((int) oldColor.getBlue()) * 255;
int or = ((int) oldColor.getRed()) * 255;
int og = ((int) oldColor.getGreen()) * 255;
int nb = ((int) newColor.getBlue()) * 255;
int nr = ((int) newColor.getRed()) * 255;
int ng = ((int) newColor.getGreen()) * 255;
Just add brackets to tell Java to do the multiplication before casting. Otherwise you'll only get 0 or 255 as results, because the color component (a double between 0 and 1) is truncated before the multiplication; for example, ((int) 0.5) * 255 is 0, while (int) (0.5 * 255) is 127.
int ob = (int) (oldColor.getBlue() * 255);
int or = (int) (oldColor.getRed() * 255);
int og = (int) (oldColor.getGreen() * 255);
int nb = (int) (newColor.getBlue() * 255);
int nr = (int) (newColor.getRed() * 255);
int ng = (int) (newColor.getGreen() * 255);
I'm trying to convert an image from YUV to RGB inside the onImageAvailable method in Java.
I'm using OpenCV for the conversion.
I can't use the RGB format from the Android Camera2 API, to avoid frame loss.
I can't choose the best format for the conversion.
Image.Plane Y = image.getPlanes()[0];
Image.Plane U = image.getPlanes()[1];
Image.Plane V = image.getPlanes()[2];
Y.getBuffer().position(0);
U.getBuffer().position(0);
V.getBuffer().position(0);
int Yb = Y.getBuffer().remaining();
int Ub = U.getBuffer().remaining();
int Vb = V.getBuffer().remaining();
ByteBuffer buffer = ByteBuffer.allocateDirect( Yb + Ub + Vb);
buffer.put(Y.getBuffer());
buffer.put(U.getBuffer());
buffer.put(V.getBuffer());
// Image is 640 x 480
Mat yuvMat = new Mat(960, 640, CvType.CV_8UC1);
yuvMat.put(0, 0, buffer.array());
// I don't know what is the correct format
Mat rgbMat = new Mat(yuvMat.rows, yuvMat.cols, CvType.CV_8UC4);
Imgproc.cvtColor(yuvMat, rgbMat, Imgproc.COLOR_YUV420sp2RGBA);
final Bitmap bit = Bitmap.createBitmap(rgbMat.cols(), rgbMat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(rgbMat, bit);
Actually, I obtain only a cropped grayscale image.
Try this function:
void decodeYUV420SP( byte[] rgb, byte[] yuv420sp, int width, int height )
{
final int frameSize = width * height;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0) r = 0; else if (r > 262143)
r = 262143;
if (g < 0) g = 0; else if (g > 262143)
g = 262143;
if (b < 0) b = 0; else if (b > 262143)
b = 262143;
//rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
int nIdx = ((width - i - 1) * height + height - j - 1) * 3;//device
//int nIdx = (i * height + j) * 3;//nox
rgb[nIdx] = (byte) (((r << 6) & 0xff0000)>>16);
rgb[nIdx+1] = (byte) (((g >> 2) & 0xff00)>>8);
rgb[nIdx+2] = (byte) ((b >> 10) & 0xff);
}
}
}
Use: decodeYUV420SP(rgb, camData, nWidth234, nHeight234);
This gives you an RGB byte array.
If you need to get an image from the byte array, try this:
public boolean convertYunToJpeg(byte[] data, int width, int height){
YuvImage image = new YuvImage(data, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
int quality = 20;
image.compressToJpeg(new Rect(0, 0, width, height), quality, baos);
byte[] jpegByteArray = baos.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegByteArray, 0, jpegByteArray.length);
Matrix matrix = new Matrix();
matrix.postRotate(-90);
Bitmap lastbitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
try {
File file = new File(BaseApplication.DIRECTORY + mCode + ".png");
if(!file.exists()){
RandomAccessFile me = new RandomAccessFile(BaseApplication.DIRECTORY + mCode + ".png", "rw");
me.writeInt(5);
me.close();
file = new File(BaseApplication.DIRECTORY + mCode + ".png");
}
FileOutputStream fos = new FileOutputStream(file);
lastbitmap.compress(Bitmap.CompressFormat.PNG, quality, fos);
} catch (IOException e) {
e.printStackTrace();
return false;
}
return true;
}
I have the following code:
private static int pixelDiff(int rgb1, int rgb2) {
int r1 = (rgb1 >> 16) & 0xff;
int g1 = (rgb1 >> 8) & 0xff;
int b1 = rgb1 & 0xff;
int r2 = (rgb2 >> 16) & 0xff;
int g2 = (rgb2 >> 8) & 0xff;
int b2 = rgb2 & 0xff;
return Math.abs(r1 - r2) + Math.abs(g1 - g2) + Math.abs(b1 - b2);
}
It works without a problem, but it takes too long and I don't know how to optimize it.
The basic idea is that I want to compare two images and get the percentage of difference.
Therefore I load the RGB values of both images and compare them with this code.
My question: is it possible to optimize this code, or do you have another idea for comparing two images (not just checking whether they are equal)?
UPDATE:
Here is the full code:
private double getDifferencePercent(BufferedImage img1, BufferedImage img2) {
int width = img1.getWidth();
int height = img1.getHeight();
int width2 = img2.getWidth();
int height2 = img2.getHeight();
if (width != width2 || height != height2) {
throw new IllegalArgumentException(String.format("Images must have the same dimensions: (%d,%d) vs. (%d,%d)", width, height, width2, height2));
}
long diff = 0;
for (int y = height - 1; y >= 0; y--) {
for (int x = width - 1; x >= 0; x--) {
diff += pixelDiff(img1.getRGB(x, y), img2.getRGB(x, y));
}
}
long maxDiff = 765L * width * height;
return 100.0 * diff / maxDiff;
}
private static int pixelDiff(int rgb1, int rgb2) {
int r1 = (rgb1 >> 16) & 0xff;
int g1 = (rgb1 >> 8) & 0xff;
int b1 = rgb1 & 0xff;
int r2 = (rgb2 >> 16) & 0xff;
int g2 = (rgb2 >> 8) & 0xff;
int b2 = rgb2 & 0xff;
return Math.abs(r1 - r2) + Math.abs(g1 - g2) + Math.abs(b1 - b2);
}
I checked this with a profiler and it shows that pixelDiff() is very slow.
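One common optimization (a sketch of my own, not taken from this thread): the per-pixel getRGB() calls are usually the real bottleneck, so read each image into an int[] with one bulk getRGB() call and run pixelDiff() over the arrays. A minimal version, assuming both images have the same dimensions as checked above:
private double getDifferencePercentFast(BufferedImage img1, BufferedImage img2) {
    int width = img1.getWidth();
    int height = img1.getHeight();
    // One bulk read per image instead of one getRGB call per pixel
    int[] rgb1 = img1.getRGB(0, 0, width, height, null, 0, width);
    int[] rgb2 = img2.getRGB(0, 0, width, height, null, 0, width);
    long diff = 0;
    for (int i = 0; i < rgb1.length; i++) {
        diff += pixelDiff(rgb1[i], rgb2[i]);
    }
    long maxDiff = 765L * width * height;
    return 100.0 * diff / maxDiff;
}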
How can I compress a libGDX Pixmap? I want to save it to disk, but it takes about 4 MB, which is way too much and takes forever to save.
final Pixmap pixmap = ScreenUtils.getFrameBufferPixmap(x, y, w, h);
FileHandle screenshot = Gdx.files.local("something.png");
PixmapIO.writePNG(screenshot, pixmap);
I saw that there is PixmapIO.writeCIM, which is quite fast and produces quite small output.
Am I able to display the something.cim file on Android? The docs say that CIM should only be used within libGDX, but maybe that is old documentation and there is something newer?
http://badlogicgames.com/forum/viewtopic.php?f=11&t=8947
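// In the snippet below, p is assumed to be the Pixmap (e.g. from ScreenUtils.getFrameBufferPixmap) and handle the FileHandle to write to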
int w = p.getWidth();
int h = p.getHeight();
int[] pixels = new int[w * h];
for (int y=0; y<h; y++) {
for (int x=0; x<w; x++) {
//convert RGBA to RGB
int value = p.getPixel(x, y);
int R = ((value & 0xff000000) >>> 24);
int G = ((value & 0x00ff0000) >>> 16);
int B = ((value & 0x0000ff00) >>> 8);
int A = ((value & 0x000000ff));
int i = x + (y * w);
pixels[ i ] = (A << 24) | (R << 16) | (G << 8) | B;
}
}
Bitmap b = Bitmap.createBitmap(pixels, w, h, Config.ARGB_8888);
b.compress(CompressFormat.JPEG, quality, handle.write(false));