I want to make a function that takes every pixel and changes it a bit. After it is changed, I have an array of all the pixels, and I want to translate that array back into an image. What would be the fastest way to do that?
byte[] argb = ((DataBufferByte) ii.getRaster().getDataBuffer()).getData();
int j = (argb.length / 4);
int[] intrb = new int[j];
for (int k = 0; k < j; k++) {
intrb[k] = (((int) argb[4*k]-0x99 & 0xff) << 24); // alpha
intrb[k] += ((int) argb[4*k+1] & 0xff); // blue
intrb[k] += (((int) argb[4*k+2] & 0xff) << 8); // green
intrb[k] += (((int) argb[4*k+3] & 0xff) << 16); // red
}
ii.setRGB(); // this is where I would translate it back, but it has to be an ARGB version.
return ii;
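For the write-back itself, one option is the bulk setRGB overload on BufferedImage, which takes the packed ARGB ints directly. A minimal sketch, assuming ii is the image the bytes came from and intrb holds exactly width * height packed ARGB values:
int width = ii.getWidth();
int height = ii.getHeight();
// setRGB(startX, startY, w, h, rgbArray, offset, scansize) writes the ints back row by row,
// interpreting them as default ARGB regardless of the image's internal storage
ii.setRGB(0, 0, width, height, intrb, 0, width);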
I have a set of meteorological RGB BufferedImages, and I want to get the average image of them. By that, I mean taking the average value of each pixel and making a new image out of those values. What I tried is this:
public BufferedImage getWaveImage(BufferedImage input1, BufferedImage input2) {
// images are the same size, so the first one's width and height are used
int width = input1.getWidth(), height = input1.getHeight();
BufferedImage output = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
int[] rgb1 = input1.getRGB(0, 0, width, height, new int[width * height], 0, width);
int[] rgb2 = input2.getRGB(0, 0, width, height, new int[width * height], 0, width);
for(int i=0; i<width; i++){
for(int j=0; j<height; j++){
int rgbIndex = j * width + i; // row-major: y * scansize + x
rgb1[rgbIndex] = (rgb1[rgbIndex] + rgb2[rgbIndex]) / 2;
}
}
output.setRGB(0, 0, width, height, rgb1, 0, width);
return output;
}
What am I doing wrong? Thank you in advance.
(The input1, input2, and output images were attached to the original post.)
You want the average of each component of the colour: average red, average green, and average blue. Instead you are averaging the whole packed int.
Color c1 = new Color(rgb1[rgbIndex]);
Color c2 = new Color(rgb2[rgbIndex]);
Color cA = new Color((c1.getRed() + c2.getRed())/2,
(c1.getGreen() + c2.getGreen())/2,
(c1.getBlue() + c2.getBlue())/2);
rgb1[rgbIndex] = cA.getRGB();
This may not be the most efficient approach, since it creates so many Color objects, so a more direct version looks like this:
public static int average(int argb1, int argb2){
return (((argb1 & 0xFF) + (argb2 & 0xFF)) >> 1) | //b
(((argb1 >> 8 & 0xFF) + (argb2 >> 8 & 0xFF)) >> 1) << 8 | //g
(((argb1 >> 16 & 0xFF) + (argb2 >> 16 & 0xFF)) >> 1) << 16 | //r
(((argb1 >> 24 & 0xFF) + (argb2 >> 24 & 0xFF)) >> 1) << 24; //a
}
Usage:
rgb1[rgbIndex] = average(rgb1[rgbIndex], rgb2[rgbIndex]);
If you have:
int rgb1, rgb2; //the rgb value of a pixel in image 1 and 2 respectively
The "average" color would be:
int r = (r(rgb1) + r(rgb2)) / 2;
int g = (g(rgb1) + g(rgb2)) / 2;
int b = (b(rgb1) + b(rgb2)) / 2;
int rgb = ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | ((b & 0xFF) << 0);
with the following "helper" methods:
private static int r(int rgb) { return (rgb >> 16) & 0xFF; }
private static int g(int rgb) { return (rgb >> 8) & 0xFF; }
private static int b(int rgb) { return (rgb >> 0) & 0xFF; }
Alternatively you can use the Color class if you don't want to deal with bitwise operations.
Another solution is to replace
rgb1[rgbIndex] = (rgb1[rgbIndex] + rgb2[rgbIndex]) / 2;
with
rgb1[rgbIndex] = ((rgb1[rgbIndex]>>1)&0x7f7f7f7f)+((rgb2[rgbIndex]>>1)&0x7f7f7f7f)+(rgb1[rgbIndex]&rgb2[rgbIndex]&0x01010101);
The binary right shift divides each channel by 2, and the last term of the sum handles the case where both channel values are odd, so the result matches integer division.
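A quick sanity check of that identity, as a hypothetical test snippet (the sample pixel values are made up):
int a = 0xFF0507FF, b = 0xFF0306FF; // two sample ARGB pixels
int avg = ((a >> 1) & 0x7f7f7f7f) + ((b >> 1) & 0x7f7f7f7f) + (a & b & 0x01010101);
// per byte: alpha (0xFF + 0xFF) / 2 = 0xFF, red (0x05 + 0x03) / 2 = 0x04,
// green (0x07 + 0x06) / 2 = 0x06 (floored), blue stays 0xFF
System.out.println(Integer.toHexString(avg)); // prints ff0406ff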
Can anyone see what the issue is when I try to convert my 8-bit image into a 4-bit image?
I am testing using the 8-bit image found here: http://laurashoe.com/2011/08/09/8-versus-16-bit-what-does-it-really-mean/
You can tell what the 4-bit image should look like, but mine is almost purely black.
// get color of the image and convert to grayscale
for(int x = 0; x <img.getWidth(); x++) {
for(int y = 0; y < img.getHeight(); y++) {
int rgb = img.getRGB(x, y);
int r = (rgb >> 16) & 0xF;
int g = (rgb >> 8) & 0xF;
int b = (rgb & 0xF);
int grayLevel = (int) (0.299*r+0.587*g+0.114*b);
int gray = (grayLevel << 16) + (grayLevel << 8) + grayLevel;
img.setRGB(x,y,gray);
}
}
You should use 0xFF, not 0xF, as 0xF masks only the last four bits, which tells you almost nothing about the colour, since each RGB component is 8 bits wide.
Try if this works:
// get color of the image and convert to grayscale
for(int x = 0; x <img.getWidth(); x++) {
for(int y = 0; y < img.getHeight(); y++) {
int rgb = img.getRGB(x, y);
int r = (rgb >> 16) & 0xFF;
int g = (rgb >> 8) & 0xFF;
int b = (rgb & 0xFF);
int grayLevel = (int) (0.299*r+0.587*g+0.114*b);
int gray = (grayLevel << 16) + (grayLevel << 8) + grayLevel;
img.setRGB(x,y,gray);
}
}
Since the code has been edited out of the question, here it is with the confirmed solution from the comments:
// get color of the image and convert to grayscale
for(int x = 0; x <img.getWidth(); x++) {
for(int y = 0; y < img.getHeight(); y++) {
int rgb = img.getRGB(x, y);
// get the upper 4 bits from each color component
int r = (rgb >> 20) & 0xF;
int g = (rgb >> 12) & 0xF;
int b = (rgb >> 4) & 0xF;
int grayLevel = (int) (0.299*r+0.587*g+0.114*b);
// use grayLevel value as the upper 4 bits of each color component of the new color
int gray = (grayLevel << 20) + (grayLevel << 12) + (grayLevel << 4);
img.setRGB(x,y,gray);
}
}
Note that the resulting image only looks like 4-bit grayscale, but still uses int as the RGB value.
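If a genuinely 4-bit image is needed, rather than an int-based one that merely looks quantized, one option is an indexed BufferedImage with a 16-entry gray palette. This is just a sketch of that idea, not part of the original answer (it uses java.awt.image.IndexColorModel):
// build a 16-entry grayscale palette: 0, 17, 34, ..., 255
byte[] gray16 = new byte[16];
for (int i = 0; i < 16; i++) gray16[i] = (byte) (i * 17);
IndexColorModel icm = new IndexColorModel(4, 16, gray16, gray16, gray16);
// TYPE_BYTE_BINARY allows 1, 2 or 4 bits per pixel when an IndexColorModel is supplied
BufferedImage img4 = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_BYTE_BINARY, icm);
img4.createGraphics().drawImage(img, 0, 0, null); // quantizes each pixel to the nearest gray entry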
8-bit image values are in the range [0, 255] because pow(2, 8) = 256.
To get 4-bit image values, which will be in the range [0, 15] since pow(2, 4) = 16,
we need to divide each pixel value by 16: range [0, 255] / 16 = range [0, 15].
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread("crowd.jpeg")
#Convert the image to grayscale
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(gray_img, cmap='gray')
Grayscale image
bit_4 = np.divide(gray_img, 16).astype('uint8')
plt.imshow(bit_4, cmap='gray')
4-bit image
I need to toggle the RGB channels of an image on and off, but I am stuck and my code is buggy. Can you help me figure out how to do this the right way?
The function channels is called when one of the three checkboxes changes its state; each boolean argument is true when the corresponding checkbox is selected. This is my code:
public void channels(boolean red, boolean green, boolean blue) {
if (this.img != null) {// checks if the image is set
char r = 0xFF, g = 0xFF, b = 0xFF;
if (red == false) {
r = 0x00;
}
if (green == false) {
g = 0x00;
}
if (blue == false) {
b = 0x00;
}
BufferedImage tmp = new BufferedImage(
img.getWidth(),
img.getHeight(),
BufferedImage.TYPE_INT_RGB);
for (int i = 0; i < img.getWidth(); i++) {
for (int j = 0; j < img.getHeight(); j++) {
int rgb = img.getRGB(i, j);
int red = (rgb >> 16) & r;
int green = (rgb >> 8) & g;
int blue = (rgb >> 0) & b;
int gbr = (red << 16) | (green << 8) | blue;// EDITED
tmp.setRGB(i, j, gbr);
}
}
img = tmp;
repaint();
} else {
//show error
}
}
Thank you for your help!
How about this optimized version, with a lot less bit shifting?
public void channels(boolean showRed, boolean showGreen, boolean showBlue) {
if (this.origImg!= null) {// checks if the image is set
int channelMask = 0xff << 24 | (showRed ? 0xff : 0) << 16 | (showGreen ? 0xff : 0) << 8 | (showBlue ? 0xff : 0);
BufferedImage tmp = new BufferedImage(origImg.getWidth(), origImg.getHeight(), BufferedImage.TYPE_INT_RGB);
for (int i = 0; i < origImg.getWidth(); i++) {
for (int j = 0; j < origImg.getHeight(); j++) {
int rgb = origImg.getRGB(i, j);
tmp.setRGB(i, j, rgb & channelMask);
}
}
img = tmp;
repaint();
} else {
//show error
}
}
A faster approach yet would probably be to use a channeled Raster, or at least a Raster configuration that allows band sub-sampling (see the Raster.createChild(...) method, especially its last parameter).
LookupOp, as mentioned by @trashgod, is also a good idea, and probably faster than the getRGB()/setRGB() approach; a rough sketch follows below.
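Here is my own illustration of the LookupOp idea, not code from the answer above; it assumes origImg is a plain 3-band TYPE_INT_RGB image and uses java.awt.image.ByteLookupTable and java.awt.image.LookupOp:
byte[] identity = new byte[256];
byte[] zero = new byte[256]; // stays all zeros: every value of the suppressed band maps to 0
for (int i = 0; i < 256; i++) identity[i] = (byte) i;
// one lookup table per band, in the raster's band order (red, green, blue for TYPE_INT_RGB);
// here the green channel is switched off
ByteLookupTable table = new ByteLookupTable(0, new byte[][] { identity, zero, identity });
BufferedImage filtered = new LookupOp(table, null).filter(origImg, null);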
It looks like you're shifting the bits back in the wrong order. Shouldn't it be int gbr = (red << 16) | (green << 8) | blue;? You basically want to shift back in the same order you shifted out to begin with.
Also, once you have cleared the corresponding colour, there's no way for you to get it back. You'll need to store a copy of the original image somewhere. When it's time to turn the channel back on, simply copy the original pixel from the original image back.
Assuming that you have the original image stored somewhere as origImg, I would modify your for loop so that if the channel is toggled on, copy from the original image.
for (int i = 0; i < img.getWidth(); i++) {
for (int j = 0; j < img.getHeight(); j++) {
int rgb = img.getRGB(i, j);
int origRGB = origImg.getRGB(i, j);
int redPixel = red ? (origRGB >> 16) & r : (rgb >> 16) & r;
int greenPixel = green ? (origRGB >> 8) & g : (rgb >> 8) & g;
int bluePixel = blue ? origRGB & b : rgb & b;
int gbr = (redPixel << 16) | (greenPixel << 8) | bluePixel;
tmp.setRGB(i, j, gbr);
}
}
I have used the method ImageIO.read(File file) to read a PNG image file. However, when I use the getRGB(int x, int y) method on it to extract the alpha, it always returns 255 whether the pixel is transparent or not. How do I remedy this inconvenience?
When converting packed int colors to Color objects, you need to tell it if it should calculate the alpha value or not.
new Color(image.getRGB(x, y), true).getAlpha();
See Color(int, boolean) for more details
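Alternatively, the alpha byte can be read straight out of the packed ARGB int that getRGB returns, without creating a Color object. A small sketch; the hasAlpha check is worth doing because an image read without an alpha channel will always report 255:
boolean hasAlpha = image.getColorModel().hasAlpha(); // false means every pixel will report alpha 255
int alpha = (image.getRGB(x, y) >>> 24) & 0xFF;      // alpha is the top byte of the packed ARGB value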
Just wanted to point out that using the method getRGB(x, y) is extremely inefficient. If you want to get the pixels of an image, you can instead extract the colours directly from the underlying data buffer and store each pixel in an int array. Credit also to mota for explaining why getRGB is inefficient; see his post. Example below:
/**
 * Method that extracts pixel data from an image.
 * @return a 2D array representing the pixels of the image
 */
public static int[][] getImageData(BufferedImage img) {
int height = img.getHeight();
int width = img.getWidth();
final byte[] imgPixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
final boolean is_Alpha_Present = img.getAlphaRaster() != null;
int[][] imgArr = new int[height][width];
if (is_Alpha_Present) {
final int pixelLength = 4; //number of bytes used to represent a pixel if alpha value present
for (int pixel = 0, row = 0, col = 0; pixel < imgPixels.length; pixel = pixel + pixelLength) {
int argb = 0;
argb += (((int) imgPixels[pixel] & 0xff) << 24); //getting the alpha for the pixel
argb += ((int) imgPixels[pixel + 1] & 0xff); //getting the blue colour
argb += (((int) imgPixels[pixel + 2] & 0xff) << 8); //getting the green colour
argb += (((int) imgPixels[pixel + 3] & 0xff) << 16); //getting the red colour
imgArr[row][col] = argb;
col++;
if (col == width) {
col = 0;
row++;
}
}
}
else {
final int pixelLength = 3;
for (int pixel = 0, row = 0, col = 0; pixel < imgPixels.length; pixel = pixel + pixelLength) {
int argb = 0;
argb += -16777216; // 0xFF000000: set alpha to 255 (fully opaque)
argb += ((int) imgPixels[pixel] & 0xff); //getting the blue colour
argb += (((int) imgPixels[pixel+1] & 0xff) << 8); //getting the green colour
argb += (((int) imgPixels[pixel+2] & 0xff) << 16); //getting the red colour
imgArr[row][col] = argb;
col++;
if (col == width) {
col = 0;
row++;
}
}
}
return imgArr;
}
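A hypothetical usage example (the file name is made up). Note the indexing convention of the returned array: the first index is the row (y) and the second is the column (x):
BufferedImage img = ImageIO.read(new File("example.png")); // hypothetical file
int[][] pixels = getImageData(img);
int argbTopLeft = pixels[0][0]; // pixels[y][x], packed as ARGB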
I have a BufferedImage which represents a 2048x2048 pixel TIFF image. I wish to retrieve such an array (int[2048][2048]) from the BufferedImage. How should I proceed?
arr = new int[2048][2048];
for(int i = 0; i < 2048; i++)
for(int j = 0; j < 2048; j++)
arr[i][j] = image.getRGB(i, j);
Since you can get the RGB value of each pixel from the image's data structure itself, it might be a benefit to NOT copy everything over to a 2D array.
This method reads the red, green and blue values directly for each pixel, and if there is an alpha channel it adds the alpha value as well. Using this method is harder in terms of calculating indices, but it is much faster than the first approach.
private static int[][] convertTo2DWithoutUsingGetRGB(BufferedImage image) {
final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
final int width = image.getWidth();
final int height = image.getHeight();
final boolean hasAlphaChannel = image.getAlphaRaster() != null;
int[][] result = new int[height][width];
if (hasAlphaChannel) {
final int pixelLength = 4;
for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
int argb = 0;
argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
argb += ((int) pixels[pixel + 1] & 0xff); // blue
argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
result[row][col] = argb;
col++;
if (col == width) {
col = 0;
row++;
}
}
} else {
final int pixelLength = 3;
for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
int argb = 0;
argb += -16777216; // 255 alpha
argb += ((int) pixels[pixel] & 0xff); // blue
argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
result[row][col] = argb;
col++;
if (col == width) {
col = 0;
row++;
}
}
}
return result;
}
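A possible usage sketch of the method above. One caveat: the DataBufferByte cast only works for byte-backed images such as TYPE_3BYTE_BGR or TYPE_4BYTE_ABGR; for an int-backed image (e.g. TYPE_INT_RGB or TYPE_INT_ARGB) it throws a ClassCastException, so checking the buffer type first is a reasonable precaution:
// assuming `image` is the 2048x2048 BufferedImage from the question
if (image.getRaster().getDataBuffer() instanceof DataBufferByte) {
    int[][] pixels = convertTo2DWithoutUsingGetRGB(image);
    System.out.println("ARGB at (0,0): " + Integer.toHexString(pixels[0][0]));
} else {
    // int-backed image: fall back to the getRGB loop above, or read samples via the Raster
}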