I have the following constructor for a RecoloredImage that takes an old image and replaces every pixel of an old color with a new color. However, the image doesn't actually change. The code between the comments is purely for testing purposes, and the resulting printed value is not at all the new color I want.
public RecoloredImaged(Image inputImage, Color oldColor, Color newColor) {
    int width = (int) inputImage.getWidth();
    int height = (int) inputImage.getHeight();

    WritableImage outputImage = new WritableImage(width, height);
    PixelReader reader = inputImage.getPixelReader();
    PixelWriter writer = outputImage.getPixelWriter();

    // -- testing --
    PixelReader newReader = outputImage.getPixelReader();
    // -- end testing --

    int ob = (int) oldColor.getBlue() * 255;
    int or = (int) oldColor.getRed() * 255;
    int og = (int) oldColor.getGreen() * 255;
    int nb = (int) newColor.getBlue() * 255;
    int nr = (int) newColor.getRed() * 255;
    int ng = (int) newColor.getGreen() * 255;

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int argb = reader.getArgb(x, y);
            int a = (argb >> 24) & 0xFF;
            int r = (argb >> 16) & 0xFF;
            int g = (argb >> 8) & 0xFF;
            int b = argb & 0xFF;
            if (g == og && r == or && b == ob) {
                r = nr;
                g = ng;
                b = nb;
            }
            argb = (a << 24) | (r << 16) | (g << 8) | b;
            writer.setArgb(x, y, argb);

            // -- testing --
            String s = Integer.toHexString(newReader.getArgb(x, y));
            if (!s.equals("0"))
                System.out.println(s);
            // -- end testing --
        }
    }

    image = outputImage;
}
The cast operator has a higher precedence than the multiplication operator. Your calculations of the ob, ..., ng values are therefore compiled to the same bytecode as this code:
int ob = ((int) oldColor.getBlue()) * 255;
int or = ((int) oldColor.getRed()) * 255;
int og = ((int) oldColor.getGreen()) * 255;
int nb = ((int) newColor.getBlue()) * 255;
int nr = ((int) newColor.getRed()) * 255;
int ng = ((int) newColor.getGreen()) * 255;
Since getRed(), getGreen(), and getBlue() return a double between 0.0 and 1.0, casting first truncates that to 0 or 1, so you only ever get 0 or 255 as results. Just add parentheses to tell Java to do the multiplication before casting:
int ob = (int) (oldColor.getBlue() * 255);
int or = (int) (oldColor.getRed() * 255);
int og = (int) (oldColor.getGreen() * 255);
int nb = (int) (newColor.getBlue() * 255);
int nr = (int) (newColor.getRed() * 255);
int ng = (int) (newColor.getGreen() * 255);
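For example, suppose the red channel value is 51 (0x33), so getRed() returns 0.2:

double red = 0.2;              // what getRed() returns for a channel value of 51 (0x33)
int wrong = (int) red * 255;   // parsed as ((int) red) * 255  ->  0 * 255  ->  0
int right = (int) (red * 255); // (int) 51.0  ->  51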
I am trying to convert a ByteBuffer to a Bitmap image, but the output I get is noisy, i.e. not what I had expected. My code is as follows:
private Bitmap getOutputImage(ByteBuffer output) {
    output.rewind();
    int outputWidth = 384;
    int outputHeight = 384;
    Bitmap bitmap = Bitmap.createBitmap(outputWidth, outputHeight, Bitmap.Config.RGB_565);
    int[] pixels = new int[outputWidth * outputHeight];
    for (int i = 0; i < outputWidth * outputHeight; i++) {
        //val a = 0xFF;
        //float a = (float) 0xFF;
        //val r: Float = output?.float!! * 255.0f;
        //byte val = output.get();
        float r = ((float) output.get()) * 255.0f;
        //val g: Float = output?.float!! * 255.0f;
        float g = ((float) output.get()) * 255.0f;
        //val b: Float = output?.float!! * 255.0f;
        float b = ((float) output.get()) * 255.0f;
        //pixels[i] = a shl 24 or (r.toInt() shl 16) or (g.toInt() shl 8) or b.toInt()
        pixels[i] = (((int) r) << 16) | (((int) g) << 8) | ((int) b);
    }
    bitmap.setPixels(pixels, 0, outputWidth, 0, 0, outputWidth, outputHeight);
    return bitmap;
}
The output image I am getting is just noise. Please advise me what is wrong here?
output.get() reads 1 byte from the buffer. You probably have to change output.get() to output.getFloat(); then it will work. This is my code:
ByteBuffer modelOutput = ByteBuffer.allocateDirect(200 * 200 * 3 * 4).order(ByteOrder.nativeOrder());
Interpreter tflite = getTfliteInterpreter("ESRGAN.tflite");
tflite.run(input, modelOutput);
modelOutput.rewind();

int outputWidth = 200;
int outputHeight = 200;
Bitmap bitmap2 = Bitmap.createBitmap(outputWidth, outputHeight, Bitmap.Config.ARGB_8888);
int[] pixels = new int[outputWidth * outputHeight];
for (int i = 0; i < outputWidth * outputHeight; i++) {
    int a = 0xFF;
    float r = modelOutput.getFloat();
    float g = modelOutput.getFloat();
    float b = modelOutput.getFloat();
    pixels[i] = a << 24 | ((int) r << 16) | ((int) g << 8) | (int) b;
}
bitmap2.setPixels(pixels, 0, outputWidth, 0, 0, outputWidth, outputHeight);
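The reason is that each value the model writes is a 4-byte float, while get() consumes only a single byte of it, so the channel values come out scrambled. A minimal sketch of the difference (not from the original answer):

ByteBuffer buf = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
buf.putFloat(200.0f);
buf.rewind();
System.out.println(buf.get());      // one raw byte of the IEEE-754 encoding, not 200
buf.rewind();
System.out.println(buf.getFloat()); // 200.0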
I have the following code:
private static int pixelDiff(int rgb1, int rgb2) {
    int r1 = (rgb1 >> 16) & 0xff;
    int g1 = (rgb1 >> 8) & 0xff;
    int b1 = rgb1 & 0xff;
    int r2 = (rgb2 >> 16) & 0xff;
    int g2 = (rgb2 >> 8) & 0xff;
    int b2 = rgb2 & 0xff;
    return Math.abs(r1 - r2) + Math.abs(g1 - g2) + Math.abs(b1 - b2);
}
and it works without a problem, but it takes too long and I don't know how to optimize it.
The basic idea is that I want to compare two images and get the percentage of difference. For that I load the RGB values of both images and compare them with this code.
My question: Is it possible to optimize this code, or do you have any other idea for comparing two images (not only checking whether they are equal)?
UPDATE:
here is the full code:
private double getDifferencePercent(BufferedImage img1, BufferedImage img2) {
    int width = img1.getWidth();
    int height = img1.getHeight();
    int width2 = img2.getWidth();
    int height2 = img2.getHeight();
    if (width != width2 || height != height2) {
        throw new IllegalArgumentException(String.format("Images must have the same dimensions: (%d,%d) vs. (%d,%d)", width, height, width2, height2));
    }

    long diff = 0;
    for (int y = height - 1; y >= 0; y--) {
        for (int x = width - 1; x >= 0; x--) {
            diff += pixelDiff(img1.getRGB(x, y), img2.getRGB(x, y));
        }
    }
    long maxDiff = 765L * width * height; // 765 = 3 * 255, the maximum possible difference per pixel
    return 100.0 * diff / maxDiff;
}

private static int pixelDiff(int rgb1, int rgb2) {
    int r1 = (rgb1 >> 16) & 0xff;
    int g1 = (rgb1 >> 8) & 0xff;
    int b1 = rgb1 & 0xff;
    int r2 = (rgb2 >> 16) & 0xff;
    int g2 = (rgb2 >> 8) & 0xff;
    int b2 = rgb2 & 0xff;
    return Math.abs(r1 - r2) + Math.abs(g1 - g2) + Math.abs(b1 - b2);
}
I checked this with a profiler and it shows that pixelDiff() is very slow.
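One common way to cut that overhead (a sketch of a general technique, not from this thread) is to avoid calling getRGB(x, y) once per pixel and instead fetch a whole row per call with the bulk getRGB() overload:

private double getDifferencePercentFast(BufferedImage img1, BufferedImage img2) {
    int width = img1.getWidth();
    int height = img1.getHeight();
    int[] row1 = new int[width];
    int[] row2 = new int[width];
    long diff = 0;
    for (int y = 0; y < height; y++) {
        img1.getRGB(0, y, width, 1, row1, 0, width); // one bulk call per row instead of width calls
        img2.getRGB(0, y, width, 1, row2, 0, width);
        for (int x = 0; x < width; x++) {
            diff += pixelDiff(row1[x], row2[x]);
        }
    }
    long maxDiff = 765L * width * height;
    return 100.0 * diff / maxDiff;
}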
So I have the image of a map, and I have an image of an overlay which I want to place over the map such that you can see the map primarily but also the overlay. However, on a pixel-by-pixel basis I do not know how to compute the color of the resulting picture so that it is a mix of both the map and the overlay. For explanation purposes I made the following image:
You can see the carrots mainly, but you can also see Donald Trump on top of the carrots. So given the RGB values (or HSV, whichever is more useful) of each individual pixel of the whole image, how would I combine them in such a way that I can see both?
The way I ended up solving this was by turning the RGB value of each pixel into an HSV color with the formula on http://www.rapidtables.com/convert/color/rgb-to-hsv.htm. Then I combined the Hue, Saturation, and Value components of the two colors in whatever ratio I wanted and turned the resulting HSV value back into RGB. I found that if you want to see one object more clearly than the other, you should preserve more of that object's Value component. Since in my project I want to see the map more than the overlay, I matched the final Value with that of the original map image.
Here's my code for the HSV color.
public static class HSVColor {
    double[] HSV = new double[3];
    int[] RGB = new int[3];
    int RGBV;

    //Constructor using the formula from http://www.rapidtables.com/convert/color/rgb-to-hsv.htm
    public HSVColor(int R, int G, int B) {
        RGB[0] = R;
        RGB[1] = G;
        RGB[2] = B;
        RGBV = (256 * 256) * R + 256 * G + B;
        double r = (double) R / 255.0;
        double g = (double) G / 255.0;
        double b = (double) B / 255.0;
        double min = Math.min(r, Math.min(g, b));
        double max = Math.max(r, Math.max(g, b));
        double delta = max - min;
        if (delta == 0) {
            HSV[0] = 0; // gray pixel: hue is undefined, use 0 instead of dividing by zero
        } else if (r == max) {
            HSV[0] = 60 * (((g - b) / delta) % 6);
        } else if (g == max) {
            HSV[0] = 60 * (((b - r) / delta) + 2);
        } else {
            HSV[0] = 60 * (((r - g) / delta) + 4);
        }
        if (max == 0) {
            HSV[1] = 0;
        } else {
            HSV[1] = delta / max;
        }
        HSV[2] = max;
    }

    //Constructor using the formula from http://www.rapidtables.com/convert/color/hsv-to-rgb.htm
    public HSVColor(double H, double S, double V) {
        HSV[0] = H;
        HSV[1] = S;
        HSV[2] = V;
        double C = V * S;
        double X = C * (1 - Math.abs(((H / 60) % 2) - 1));
        double m = V - C;
        double r;
        double g;
        double b;
        if (H < 60) {
            r = C;
            g = X;
            b = 0;
        } else if (H < 120) {
            r = X;
            g = C;
            b = 0;
        } else if (H < 180) {
            r = 0;
            g = C;
            b = X;
        } else if (H < 240) {
            r = 0;
            g = X;
            b = C;
        } else if (H < 300) {
            r = X;
            g = 0;
            b = C;
        } else {
            r = C;
            g = 0;
            b = X;
        }
        RGB[0] = (int) ((r + m) * 255);
        RGB[1] = (int) ((g + m) * 255);
        RGB[2] = (int) ((b + m) * 255);
        RGBV = (256 * 256) * RGB[0] + 256 * RGB[1] + RGB[2];
    }
}
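For illustration, here is a possible blend along the lines described above (this helper is not part of the original solution; the ratio parameter is an assumption, and the map's Value is kept so the map stays dominant):

public static HSVColor blend(HSVColor map, HSVColor overlay, double ratio) {
    // Weighted average of Hue and Saturation (naive averaging that ignores the
    // 360-degree hue wrap-around), keeping the map's Value unchanged.
    double h = map.HSV[0] * ratio + overlay.HSV[0] * (1 - ratio);
    double s = map.HSV[1] * ratio + overlay.HSV[1] * (1 - ratio);
    return new HSVColor(h, s, map.HSV[2]);
}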
How do I go about getting the pixel color of an RGBA texture? Say I have a function like this:
public Color getPixel(int x, int y) {
    int r = ...
    int g = ...
    int b = ...
    int a = ...
    return new Color(r, g, b, a);
}
I'm having a hard time getting glGetTexImage() to work:
int[] p = new int[size.x * size.y * 4];
ByteBuffer buffer = ByteBuffer.allocateDirect(size.x * size.y * 16);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
buffer.asIntBuffer().get(p);
for (int i = 0; i < p.length; i++) {
    p[i] = (int) (p[i] & 0xFF);
}
But I don't know how to access a pixel with a given coordinate.
Like this? Hope this helps you :)
public Color getPixel(BufferedImage image, int x, int y) {
    if (y < image.getHeight() && x < image.getWidth()) {
        // grab every pixel of the image as a packed ARGB int (4 channels for RGBA)
        int[] pixels = image.getRGB(0, 0, image.getWidth(), image.getHeight(), null, 0, image.getWidth());
        int pixel = pixels[y * image.getWidth() + x];
        int r = (pixel >> 16) & 0xFF; // Red
        int g = (pixel >> 8) & 0xFF;  // Green
        int b = pixel & 0xFF;         // Blue
        int a = (pixel >> 24) & 0xFF; // Alpha
        return new Color(r, g, b, a);
    } else {
        return new Color(0, 0, 0, 1);
    }
}
It's not tested, but it should work.
Here's what I did to accomplish this.
First, I set the pixels in a byte[] with glGetTexImage.
byte[] pixels = new byte[size.x * size.y * 4];
ByteBuffer buffer = ByteBuffer.allocateDirect(pixels.length);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
buffer.get(pixels);
Then, to get a pixel at a specific coordinate, here's the algorithm I used:
public Color getPixel(int x, int y) {
    if (x >= size.x || y >= size.y) { // coordinates must lie inside the texture
        return null;
    }
    int index = (x + y * size.x) * 4;
    int r = pixels[index] & 0xFF;
    int g = pixels[index + 1] & 0xFF;
    int b = pixels[index + 2] & 0xFF;
    int a = pixels[index + 3] & 0xFF;
    return new Color(r, g, b, a);
}
This returns a Color object with arguments ranging from 0-255, as expected.
I want to read in a JPEG image that has a uniform gray background and several colored balls of the same size on it. I want a program which can take this image and record the coordinates of each ball. What's the best way to do this?
I agree with James. I used the following program once to find red boxes in an image (before most of the red boxes were recolored by the community):
/**
 * @author karnokd, 2008.11.07.
 * @version $Revision 1.0$
 */
public class RedLocator {
    public static class Rect {
        int x;
        int y;
        int x2;
        int y2;
    }
    static List<Rect> rects = new LinkedList<Rect>();
    static boolean checkRect(int x, int y) {
        for (Rect r : rects) {
            if (x >= r.x && x <= r.x2 && y >= r.y && y <= r.y2) {
                return true;
            }
        }
        return false;
    }
    public static void main(String[] args) throws Exception {
        BufferedImage image = ImageIO.read(new File("fallout3worldmapfull.png"));
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int c = image.getRGB(x, y);
                int red = (c & 0x00ff0000) >> 16;
                int green = (c & 0x0000ff00) >> 8;
                int blue = c & 0x000000ff;
                // check red-ness
                if (red > 180 && green < 30 && blue < 30) {
                    if (!checkRect(x, y)) {
                        int tmpx = x;
                        int tmpy = y;
                        while (red > 180 && green < 30 && blue < 30) {
                            c = image.getRGB(tmpx++, tmpy);
                            red = (c & 0x00ff0000) >> 16;
                            green = (c & 0x0000ff00) >> 8;
                            blue = c & 0x000000ff;
                        }
                        tmpx -= 2;
                        c = image.getRGB(tmpx, tmpy);
                        red = (c & 0x00ff0000) >> 16;
                        green = (c & 0x0000ff00) >> 8;
                        blue = c & 0x000000ff;
                        while (red > 180 && green < 30 && blue < 30) {
                            c = image.getRGB(tmpx, tmpy++);
                            red = (c & 0x00ff0000) >> 16;
                            green = (c & 0x0000ff00) >> 8;
                            blue = c & 0x000000ff;
                        }
                        Rect r = new Rect();
                        r.x = x;
                        r.y = y;
                        r.x2 = tmpx;
                        r.y2 = tmpy - 2;
                        rects.add(r);
                    }
                }
            }
        }
    }
}
Might give you some hints. The image originates from here.
You can use the ImageIO library to read in an image by using one of the read() methods. This produces a BufferedImage which you can analyze for the separate colors. getRGB() is probably the best way to do this. You can compare this to the getRGB() of a Color object if you need to. That should be enough to get you started.
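A minimal sketch of that approach (the file name and target color are placeholders; for a JPEG you will usually need a tolerance rather than an exact match, as in the RedLocator example above):

BufferedImage img = ImageIO.read(new File("balls.jpg")); // placeholder file name
for (int y = 0; y < img.getHeight(); y++) {
    for (int x = 0; x < img.getWidth(); x++) {
        Color c = new Color(img.getRGB(x, y));
        if (c.equals(Color.RED)) { // exact match; use a threshold to cope with JPEG artifacts
            System.out.println("ball pixel at (" + x + ", " + y + ")");
        }
    }
}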