Given an int from a DataBuffer which has ARGB data packed in it with the masks
A = 0xFF000000
R = 0xFF0000
G = 0xFF00
B = 0xFF
I'm doing the following but wonder if there isn't a faster method in Java?
DataBuffer db1 = img1.getData().getDataBuffer();
DataBuffer db2 = img2.getData().getDataBuffer();
int x, y;
int totalDiff = 0;
for (int i = 0; i < WIDTH * HEIGHT; ++i) {
x = db1.getElem(i);
y = db2.getElem(i);
totalDiff += Math.abs((x & 0xFF) - (y & 0xFF))
+ Math.abs(((x & 0xFF00) >> 8) - ((y & 0xFF00) >> 8))
+ Math.abs(((x & 0xFF0000) >> 16) - ((y & 0xFF0000) >> 16 ));
}
If you really need the speed-up, you might want to check the type of the DataBuffer and provide optimized code for the concrete type, so that you save the calls to getElem(i). This will speed up your code a little bit.
Something like this:
DataBuffer db1 = img1.getData().getDataBuffer();
DataBuffer db2 = img2.getData().getDataBuffer();
int totalDiff = 0;
int x, y;
if (db1 instanceof DataBufferInt && db2 instanceof DataBufferInt) {
int[] data1 = ((DataBufferInt) db1).getData();
int[] data2 = ((DataBufferInt) db2).getData();
for (int i = 0; i < WIDTH * HEIGHT; ++i) {
x = data1[i];
y = data2[i];
totalDiff += Math.abs((x & 0xFF) - (y & 0xFF))
+ Math.abs(((x & 0xFF00) >> 8) - ((y & 0xFF00) >> 8))
+ Math.abs(((x & 0xFF0000) >> 16) - ((y & 0xFF0000) >> 16));
}
} else {
for (int i = 0; i < WIDTH * HEIGHT; ++i) {
x = db1.getElem(i);
y = db2.getElem(i);
totalDiff += Math.abs((x & 0xFF) - (y & 0xFF))
+ Math.abs(((x & 0xFF00) >> 8) - ((y & 0xFF00) >> 8))
+ Math.abs(((x & 0xFF0000) >> 16) - ((y & 0xFF0000) >> 16));
}
}
Edit:
Another idea that would bring you a MUCH higher speed-up: if this is just a heuristic, it might be enough to calculate the difference on a somewhat "downsampled" version of your images. Replace ++i with i += 10 and gain a speed-up of roughly a factor of 10, as in the sketch below. Whether this makes sense of course depends on the kind of images you have.
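For illustration, a sketch of that stride idea applied to the loop from the question (db1, db2, WIDTH, HEIGHT and totalDiff are the question's names; only every 10th pixel is compared):
for (int i = 0; i < WIDTH * HEIGHT; i += 10) {
    int x = db1.getElem(i);
    int y = db2.getElem(i);
    totalDiff += Math.abs((x & 0xFF) - (y & 0xFF))
               + Math.abs(((x & 0xFF00) >> 8) - ((y & 0xFF00) >> 8))
               + Math.abs(((x & 0xFF0000) >> 16) - ((y & 0xFF0000) >> 16));
}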
Edit:
In one comment you mentioned it's a fitness function for a GA... in that case it might be enough to grab 100 (or just 10?) random locations from your images and compare the pixels at those locations, as in the sketch below. The speed-up gained will most probably outweigh the loss in accuracy.
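A minimal sketch of that sampling variant (db1, db2, WIDTH and HEIGHT as in the question; SAMPLES is a made-up tuning knob):
java.util.Random rnd = new java.util.Random();
final int SAMPLES = 100; // or just 10
int totalDiff = 0;
for (int s = 0; s < SAMPLES; s++) {
    int i = rnd.nextInt(WIDTH * HEIGHT);
    int x = db1.getElem(i);
    int y = db2.getElem(i);
    totalDiff += Math.abs((x & 0xFF) - (y & 0xFF))
               + Math.abs(((x & 0xFF00) >> 8) - ((y & 0xFF00) >> 8))
               + Math.abs(((x & 0xFF0000) >> 16) - ((y & 0xFF0000) >> 16));
}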
Agree with @Arne.
You could also cut down on the right shifts in ((x & 0xFF0000) >> 16) - ((y & 0xFF0000) >> 16).
abs((x & 0xFF0000) - (y & 0xFF0000)) is always a multiple of 0x10000 and, scaled back down, still lies in the range 0-255, so one shift after the subtraction (or a single shift on a per-channel running total at the very end) does the same work as shifting both operands for every pixel.
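A hedged sketch of that idea for the red channel only (db1, db2, WIDTH, HEIGHT and totalDiff are the names from the question; the accumulator is a long because the unshifted per-pixel difference can be as large as 0xFF0000):
long redDiff = 0;
for (int i = 0; i < WIDTH * HEIGHT; ++i) {
    int x = db1.getElem(i);
    int y = db2.getElem(i);
    redDiff += Math.abs((x & 0xFF0000) - (y & 0xFF0000)); // always a multiple of 0x10000
}
totalDiff += (int) (redDiff >> 16); // one shift in total instead of two per pixel
Green and blue would get their own accumulators, shifted by 8 and 0 respectively at the end.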
It would help if you could say what it is you are trying to determine.
That is, could the pixel information be stored in a form more conducive to what you are trying to achieve, for example as chrominance (YUV, YCrCb)?
If calculating the sum of squares is acceptable for you, it is faster:
for (int i = 0; i < WIDTH * HEIGHT; ++i) {
x = db1.getElem(i);
y = db2.getElem(i);
int dr = ((x & 0xFF0000) >> 16) - ((y & 0xFF0000) >> 16 );
int dg = ((x & 0xFF00) >> 8) - ((y & 0xFF00) >> 8);
int db = (x & 0xFF) - (y & 0xFF);
totalDiff += dr*dr + dg*dg + db*db;
}
Related
I want to make a function that takes every pixel and changes it a bit.
After it is changed I have an array of all the pixels, and I want to translate that array back to an image.
What would be the fastest way to do that?
byte[] argb = ((DataBufferByte) ii.getRaster().getDataBuffer()).getData();
int j = argb.length / 4;
int[] intrb = new int[j];
for (int k = 0; k < j; k++) {
    intrb[k] = (((int) argb[4 * k] - 0x99 & 0xff) << 24); // alpha
    intrb[k] += ((int) argb[4 * k + 1] & 0xff);           // blue
    intrb[k] += (((int) argb[4 * k + 2] & 0xff) << 8);    // green
    intrb[k] += (((int) argb[4 * k + 3] & 0xff) << 16);   // red
}
ii.setRGB(); // this is where I would translate it back, but it has to take the ARGB version
return ii;
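One way to do that last step, as a sketch (it assumes ii is the BufferedImage from above and that intrb holds one packed ARGB int per pixel in row-major order):
int width = ii.getWidth();
int height = ii.getHeight();

// Option 1: write the packed pixels back into the existing image in one call;
// setRGB(startX, startY, w, h, rgbArray, offset, scansize) converts each ARGB int
// to the image's own storage format.
ii.setRGB(0, 0, width, height, intrb, 0, width);

// Option 2 (usually faster): create a TYPE_INT_ARGB image and copy straight into
// its backing int[], so no per-pixel conversion is needed.
BufferedImage out = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
int[] raster = ((DataBufferInt) out.getRaster().getDataBuffer()).getData();
System.arraycopy(intrb, 0, raster, 0, intrb.length);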
I have a set of meteorological RGB-type BufferedImages. I want to get the average image of them. By that I mean: take the average value of each pixel and make a new image out of those values. What I tried is this:
public BufferedImage getWaveImage(BufferedImage input1, BufferedImage input2){
// the images are of the same size, that's why I'll use the first one's width and height
int width = input1.getWidth(), height = input1.getHeight();
BufferedImage output = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
int[] rgb1 = input1.getRGB(0, 0, width, height, new int[width * height], 0, width);
int[] rgb2 = input2.getRGB(0, 0, width, height, new int[width * height], 0, width);
for(int i=0; i<width; i++){
for(int j=0; j<height; j++){
int rgbIndex = i * width + j;
rgb1[rgbIndex] = (rgb1[rgbIndex] + rgb2[rgbIndex]) / 2;
}
}
output.setRGB(0, 0, width, height, rgb1, 0, width);
return output;
}
What am I doing wrong? Thank you in advance.
(The input1, input2, and output example images were attached to the original question.)
You want the average of each component of the colour: average red, average green, average blue.
Instead you are averaging the whole packed int, which lets the channels bleed into one another.
Color c1 = new Color(rgb1[rgbIndex]);
Color c2 = new Color(rgb2[rgbIndex]);
Color cA = new Color((c1.getRed() + c2.getRed())/2,
(c1.getGreen() + c2.getGreen())/2,
(c1.getBlue() + c2.getBlue())/2);
rgb1[rgbIndex] = cA.getRGB();
This may not be the most efficient due to creating so many objects, so a more direct approach is like so:
public static int average(int argb1, int argb2){
return (((argb1 & 0xFF) + (argb2 & 0xFF)) >> 1) | //b
(((argb1 >> 8 & 0xFF) + (argb2 >> 8 & 0xFF)) >> 1) << 8 | //g
(((argb1 >> 16 & 0xFF) + (argb2 >> 16 & 0xFF)) >> 1) << 16 | //r
(((argb1 >> 24 & 0xFF) + (argb2 >> 24 & 0xFF)) >> 1) << 24; //a
}
Usage:
rgb1[rgbIndex] = average(rgb1[rgbIndex], rgb2[rgbIndex]);
If you have:
int rgb1, rgb2; //the rgb value of a pixel in image 1 and 2 respectively
The "average" color would be:
int r = (r(rgb1) + r(rgb2)) / 2;
int g = (g(rgb1) + g(rgb2)) / 2;
int b = (b(rgb1) + b(rgb2)) / 2;
int rgb = ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | ((b & 0xFF) << 0);
with the following "helper" methods:
private static int r(int rgb) { return (rgb >> 16) & 0xFF; }
private static int g(int rgb) { return (rgb >> 8) & 0xFF; }
private static int b(int rgb) { return (rgb >> 0) & 0xFF; }
Alternatively you can use the Color class if you don't want to deal with bitwise operations.
Another solution is to replace
rgb1[rgbIndex] = (rgb1[rgbIndex] + rgb2[rgbIndex]) / 2;
with
rgb1[rgbIndex] = ((rgb1[rgbIndex]>>1)&0x7f7f7f7f)+((rgb2[rgbIndex]>>1)&0x7f7f7f7f)+(rgb1[rgbIndex]&rgb2[rgbIndex]&0x01010101);
The binary right shift divides each channel by 2; the last term of the sum handles the case of two odd channel values (it adds back the 1 that would otherwise be lost). The sketch below wraps the same expression in a helper for readability.
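A sketch of that trick as a method (the name averageArgb is made up):
static int averageArgb(int p1, int p2) {
    // halve every byte, masking off the bit that would leak in from the channel above
    return ((p1 >> 1) & 0x7f7f7f7f)
         + ((p2 >> 1) & 0x7f7f7f7f)
         + (p1 & p2 & 0x01010101); // add back the 1 lost when both channel values are odd
}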
Given two colors A and B, I would like to get the resulting color C, that is the most possible realistic natural mix of the A and B.
Example :
Red + Yellow = Orange
Blue + Yellow = Green
Red + Blue = Purple
Blue + White = Light Blue
Blue + Black = Dark Blue
etc...
Can I get it from the ARGB representations of the given colors?
We can write a function that returns the result array when given two arrays as parameters. The arrays must be the same size.
public int[] getAvgARGB(int[] clr1, int[] clr2){
    int[] returnArray = new int[clr1.length];
    for (int i = 0; i < clr1.length; i++) {
        int a1 = (clr1[i] & 0xFF000000) >>> 24;
        int r1 = (clr1[i] & 0x00FF0000) >> 16;
        int g1 = (clr1[i] & 0x0000FF00) >> 8;
        int b1 =  clr1[i] & 0x000000FF;
        int a2 = (clr2[i] & 0xFF000000) >>> 24;
        int r2 = (clr2[i] & 0x00FF0000) >> 16;
        int g2 = (clr2[i] & 0x0000FF00) >> 8;
        int b2 =  clr2[i] & 0x000000FF;
        int aAvg = (a1 + a2) / 2;
        int rAvg = (r1 + r2) / 2;
        int gAvg = (g1 + g2) / 2;
        int bAvg = (b1 + b2) / 2;
        returnArray[i] = (aAvg << 24) + (rAvg << 16) + (gAvg << 8) + bAvg;
    }
    return returnArray;
}
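Possible usage, assuming two same-sized BufferedImages (img1, img2 and mixed are made-up names):
int w = img1.getWidth(), h = img1.getHeight();
int[] clr1 = img1.getRGB(0, 0, w, h, new int[w * h], 0, w);
int[] clr2 = img2.getRGB(0, 0, w, h, new int[w * h], 0, w);

int[] avg = getAvgARGB(clr1, clr2); // per-channel average of every pixel

BufferedImage mixed = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
mixed.setRGB(0, 0, w, h, avg, 0, w);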
Can anyone see what the issue is when I try to convert my 8-bit image into a 4-bit image?
I am testing using the 8-bit image found here: http://laurashoe.com/2011/08/09/8-versus-16-bit-what-does-it-really-mean/
You can tell what the 4-bit image should look like, but mine is almost purely black.
// get color of the image and convert to grayscale
for(int x = 0; x <img.getWidth(); x++) {
for(int y = 0; y < img.getHeight(); y++) {
int rgb = img.getRGB(x, y);
int r = (rgb >> 16) & 0xF;
int g = (rgb >> 8) & 0xF;
int b = (rgb & 0xF);
int grayLevel = (int) (0.299*r+0.587*g+0.114*b);
int gray = (grayLevel << 16) + (grayLevel << 8) + grayLevel;
img.setRGB(x,y,gray);
}
}
You should use 0xFF, not 0xF: 0xF keeps only the last four bits, which tell you almost nothing about the color, since each RGB color component is 8 bits.
Try whether this works:
// get color of the image and convert to grayscale
for(int x = 0; x <img.getWidth(); x++) {
for(int y = 0; y < img.getHeight(); y++) {
int rgb = img.getRGB(x, y);
int r = (rgb >> 16) & 0xFF;
int g = (rgb >> 8) & 0xFF;
int b = (rgb & 0xFF);
int grayLevel = (int) (0.299*r+0.587*g+0.114*b);
int gray = (grayLevel << 16) + (grayLevel << 8) + grayLevel;
img.setRGB(x,y,gray);
}
}
Since the code has been edited out from the question, here it is with the confirmed solution from the comments:
// get color of the image and convert to grayscale
for(int x = 0; x <img.getWidth(); x++) {
for(int y = 0; y < img.getHeight(); y++) {
int rgb = img.getRGB(x, y);
// get the upper 4 bits from each color component
int r = (rgb >> 20) & 0xF;
int g = (rgb >> 12) & 0xF;
int b = (rgb >> 4) & 0xF;
int grayLevel = (int) (0.299*r+0.587*g+0.114*b);
// use grayLevel value as the upper 4 bits of each color component of the new color
int gray = (grayLevel << 20) + (grayLevel << 12) + (grayLevel << 4);
img.setRGB(x,y,gray);
}
}
Note that the resulting image only looks like 4-bit grayscale; it is still stored as full int RGB values. If you actually need 4 bits per pixel, see the sketch below.
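One option for that (a sketch, not from the original answer) is a BufferedImage backed by a 16-entry grayscale IndexColorModel; drawing the converted img onto it snaps every pixel to one of the 16 gray levels:
byte[] gray16 = new byte[16];
for (int i = 0; i < 16; i++) {
    gray16[i] = (byte) (i * 17); // 16 evenly spaced gray levels, 0..255
}
IndexColorModel icm = new IndexColorModel(4, 16, gray16, gray16, gray16);
BufferedImage out = new BufferedImage(img.getWidth(), img.getHeight(),
        BufferedImage.TYPE_BYTE_BINARY, icm);
out.getGraphics().drawImage(img, 0, 0, null);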
8-bit image values are in the range [0, 255] because pow(2, 8) = 256.
To get 4-bit image values, which will be in the range [0, 15] since pow(2, 4) = 16,
we need to divide each pixel value by 16: [0, 255] / 16 -> [0, 15].
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread("crowd.jpeg")
#Convert the image to grayscale
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(gray_img, cmap='gray')
Grayscale image
bit_4 = np.divide(gray_img, 16).astype('uint8')
plt.imshow(bit_4, cmap='gray')
Bit4 image
public float calculateDifference(BufferedImage b1, BufferedImage b2){
float error = 0;
for(int y = 0; y < sizeY; y++){
for(int x = 0; x < sizeX; x++){
Color c1 = new Color(b1.getRGB(x, y));
Color c2 = new Color(b2.getRGB(x, y));
error += Math.abs(c1.getRed() - c2.getRed());
error += Math.abs(c1.getGreen() - c2.getGreen());
error += Math.abs(c1.getBlue() - c2.getBlue());
error += Math.abs(c1.getAlpha() - c2.getAlpha());
}
}
return error;
}
I have this function that compares two BufferedImages. It returns a higher error the more different the two images are. The only problem is that it runs really slowly, so is there a more efficient way to do this? Any way to lower the runtime would really help.
Yes, you can optimize the inner for loop.
Don't create new Color objects; use the int RGB value directly. This limits the number of objects created and the frequency of garbage collection.
int color1 = b1.getRGB(x, y);
int alpha1 = (color1 >> 24) & 0xFF;
int red1 = (color1 >> 16) & 0xFF;
int green1 = (color1 >> 8) & 0xFF;
int blue1 = (color1 >> 0) & 0xFF;
int color2 = b2.getRGB(x, y);
int alpha2 = (color2 >> 24) & 0xFF;
int red2 = (color2 >> 16) & 0xFF;
int green2 = (color2 >> 8) & 0xFF;
int blue2 = (color2 >> 0) & 0xFF;
error += Math.abs(red1 - red2);
error += Math.abs(green1 - green2);
error += Math.abs(blue1 - blue2);
error += Math.abs(alpha1 - alpha2);
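Putting that together, a sketch of the whole method without the Color allocations (sizeX and sizeY are the fields from the question; error is accumulated in a long so large images don't overflow, and it widens implicitly to the float return value):
public float calculateDifference(BufferedImage b1, BufferedImage b2) {
    long error = 0;
    for (int y = 0; y < sizeY; y++) {
        for (int x = 0; x < sizeX; x++) {
            int c1 = b1.getRGB(x, y);
            int c2 = b2.getRGB(x, y);
            error += Math.abs(((c1 >> 24) & 0xFF) - ((c2 >> 24) & 0xFF)); // alpha
            error += Math.abs(((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF)); // red
            error += Math.abs(((c1 >> 8) & 0xFF) - ((c2 >> 8) & 0xFF));   // green
            error += Math.abs((c1 & 0xFF) - (c2 & 0xFF));                 // blue
        }
    }
    return error;
}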