I want to convert an image to a grayscale image where the pixel intensities are between 0 and 255.
I was able to convert images to grayscale with the following Java method.
public void ConvertToGrayScale(BufferedImage bufImage, int ImgWidth, int ImgHeight) {
    for (int w = 0; w < ImgWidth; w++) {
        for (int h = 0; h < ImgHeight; h++) {
            Color color = new Color(bufImage.getRGB(w, h));
            int ColAvgVal = (color.getRed() + color.getGreen() + color.getBlue()) / 3;
            Color avg = new Color(ColAvgVal, ColAvgVal, ColAvgVal);
            bufImage.setRGB(w, h, avg.getRGB());
            System.out.println(avg.getRGB());
        }
    }
}
"System.out.println(avg.getRGB());" is used to see the pixel intensities but the all the grey levels are minus values and not between 0-255.
Am I doing it wrong ? How would I convert an image to a gray scale image where pixel intensities are between 0-255.
Thanks
color.getRGB() does not return a value from 0..255; it returns an integer composed of your red, green and blue values, including the alpha value. Presumably, this alpha value is 0xFF, which makes any combined color end up as 0xFFrrggbb, or, as you got, a huge negative number when printed in decimal.
To see the "gray" level assigned, just check ColAvgVal.
Note that a better formula to convert between RGB and grayscale is to use the PAL/NTSC conversion:
gray = 0.299 * red + 0.587 * green + 0.114 * blue
because "full blue" should be darker in grayscale than "full red" and "full green".
Note: if you use this formula directly, watch out for floating point rounding errors. In theory, it should not return a value outside of 0..255 for gray; in practice, it will. So test and clamp the result.
Another option which does not require testing-and-clamping per pixel, is to use an integer-only version:
gray = (299 * red + 587 * green + 114 * blue)/1000;
which should work with only a very small rounding error.
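Put together, a minimal sketch of the loop body using this integer-only formula (again reusing the names from the question's code) could look like this; no clamping is needed because the weights sum to 1000:
Color color = new Color(bufImage.getRGB(w, h));
int gray = (299 * color.getRed() + 587 * color.getGreen() + 114 * color.getBlue()) / 1000;
bufImage.setRGB(w, h, new Color(gray, gray, gray).getRGB());
System.out.println(gray);   // prints a value in 0..255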
You can check this; I hope it can help you.
You can try some different methods, like the following:
// The average grayscale method
private static BufferedImage avg(BufferedImage original) {
    int alpha, red, green, blue;
    int newPixel;

    BufferedImage avg_gray = new BufferedImage(original.getWidth(), original.getHeight(), original.getType());

    // Lookup table: index is the sum of the three channels (0..765), value is their average
    int[] avgLUT = new int[766];
    for (int i = 0; i < avgLUT.length; i++)
        avgLUT[i] = i / 3;

    for (int i = 0; i < original.getWidth(); i++) {
        for (int j = 0; j < original.getHeight(); j++) {
            // Get pixels by R, G, B
            alpha = new Color(original.getRGB(i, j)).getAlpha();
            red = new Color(original.getRGB(i, j)).getRed();
            green = new Color(original.getRGB(i, j)).getGreen();
            blue = new Color(original.getRGB(i, j)).getBlue();

            newPixel = red + green + blue;
            newPixel = avgLUT[newPixel];

            // Return back to the original ARGB format
            newPixel = colorToRGB(alpha, newPixel, newPixel, newPixel);

            // Write the pixel into the output image
            avg_gray.setRGB(i, j, newPixel);
        }
    }

    return avg_gray;
}
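The snippet above relies on a colorToRGB helper that is not shown. A minimal sketch of such a helper, which simply packs the four 8-bit components back into a single ARGB integer, could be:
private static int colorToRGB(int alpha, int red, int green, int blue) {
    int newPixel = 0;
    newPixel += alpha;
    newPixel = newPixel << 8;
    newPixel += red;
    newPixel = newPixel << 8;
    newPixel += green;
    newPixel = newPixel << 8;
    newPixel += blue;
    return newPixel;   // 0xAARRGGBB
}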
Related
I'm looking to add colour to a greyscale image; it doesn't have to be an accurate colour representation, but adding a colour to a different shade of grey. This is just to identify different areas of interest within an image. E.g. areas of vegetation are likely to be a similar shade of grey; by adding a colour to this range of values it should be clear which areas are vegetation, which are water, etc.
I have the code for getting colours from an image and storing them as Color objects, but this doesn't seem to give a way to modify the values.
E.g. if the grey value is less than 85, colour it red; if between 86 and 170, colour it green; and between 171 and 255, colour it blue. I have no idea how this will look, but in theory the resulting image should allow a user to identify the different areas.
The current code I have for getting pixel values is below.
int total_pixels = (h * w);
Color[] colors = new Color[total_pixels];
int i = 0;
for (int x = 0; x < w; x++)
{
    for (int y = 0; y < h; y++)
    {
        colors[i] = new Color(image.getRGB(x, y));
        i++;
    }
}
for (int i = 0; i < total_pixels; i++)
{
    Color c = colors[i];
    int r = c.getRed();
    int g = c.getGreen();
    int b = c.getBlue();
    System.out.println("Red " + r + " | Green " + g + " | Blue " + b);
}
I appreciate any help at all! Thanks a lot
You are going to have to choose your own method of converting colours from the greyscale scheme to whatever colour you want.
In the example you've given, you could do something like this.
public Color newColorFor(int pixel) {
    Color c = colors[pixel];
    int r = c.getRed(); // Since this is grey, the G and B values should be the same
    if (r < 86) {
        return new Color(r * 3, 0, 0);           // return a red
    } else if (r < 172) {
        return new Color(0, (r - 86) * 3, 0);    // return a green
    } else {
        return new Color(0, 0, (r - 172) * 3);   // return a blue
    }
}
You may have to play around a bit to get the best algorithm. I suspect that the code above will actually make your image look quite dark and dingy. You'd be better off with lighter colours. For example, you might change every 0 in the code above to 255, which will give you shades of yellow, magenta and cyan. This is going to be a lot of trial and error.
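For example, applying that suggestion to the method above gives a purely illustrative variant (the thresholds and factors are still the ones from the example, not a recommendation):
public Color lighterColorFor(int pixel) {
    int r = colors[pixel].getRed();
    if (r < 86) {
        return new Color(r * 3, 255, 255);            // shades of cyan
    } else if (r < 172) {
        return new Color(255, (r - 86) * 3, 255);     // shades of magenta
    } else {
        return new Color(255, 255, (r - 172) * 3);    // shades of yellow
    }
}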
I recommend you take a look at Java2D. It has many classes that can make your life much easier. You may end up reinventing the wheel if you ignore it.
Here is a short showcase of what you can do:
int width = 100;
int height = 100;
int x = 0, y = 0;
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
int rgb = image.getRGB(x, y);                       // read a single pixel as a packed ARGB int
Graphics2D g2d = (Graphics2D) image.getGraphics();
g2d.setColor(Color.GREEN);
g2d.fillRect(x, y, width, height);                  // fill a rectangle starting at (x, y)
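In particular, for the grayscale conversions discussed on this page, Java2D already ships a ColorConvertOp that does the work for you. A minimal sketch, assuming bufImage is the image you want to convert:
// uses java.awt.color.ColorSpace and java.awt.image.ColorConvertOp
ColorConvertOp toGray = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
BufferedImage grayImage = toGray.filter(bufImage, null);   // returns a new grayscale copy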
Ok, I am using Processing which allows me to access pixels of any image as int[]. What I now want to do is to convert the image to gray-scale. Each pixel has a structure as shown below:
...........PIXEL............
[red | green | blue | alpha]
<-8--><--8---><--8--><--8-->
Now, what transformations do I need to apply to the individual RGB values to make the image gray-scale?
What I mean is, how much do I add/subtract to make the image gray-scale?
Update
I found a few methods here: http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
For each pixel, the value for the red, green and blue channels should be set to their average, like this:
int red = pixel.R;
int green = pixel.G;
int blue = pixel.B;
pixel.R = pixel.G = pixel.B = (red + green + blue) / 3;
Since in your case the pixel colors seem to be stored in an array rather than in properties, your code could end up looking like:
int red = pixel[0];
int green = pixel[1];
int blue = pixel[2];
pixel[0] = pixel[1] = pixel[2] = (red + green + blue) / 3;
The general idea is that when you have a gray scale image, each pixel's color measures only the intensity of light at that point - and the way we perceive that is the average of the intensity for each color channel.
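Since in Processing each pixel arrives as a single packed int rather than as separate properties, a minimal sketch of the same averaging using shifts and masks (assuming the usual 0xAARRGGBB layout used by Java's BufferedImage and Processing's pixels[] array) would be:
int argb  = pixels[i];
int alpha = (argb >> 24) & 0xFF;
int red   = (argb >> 16) & 0xFF;
int green = (argb >> 8)  & 0xFF;
int blue  =  argb        & 0xFF;

int gray = (red + green + blue) / 3;

// write the same gray value into all three colour channels, keeping the original alpha
pixels[i] = (alpha << 24) | (gray << 16) | (gray << 8) | gray;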
The following code loads an image and cycles through its pixels, changing the saturation to zero while keeping the same hue and brightness values.
PImage img;

void setup() {
    colorMode(HSB, 100);
    img = loadImage("img.png");
    size(img.width, img.height);
    float sat = 0;   // zero saturation
    img.loadPixels();
    for (int i = 0; i < width * height; i++) {
        img.pixels[i] = color(hue(img.pixels[i]), sat, brightness(img.pixels[i]));
    }
    img.updatePixels();
    image(img, 0, 0);
}
How do I isolate the red/green/blue channel in a BufferedImage? I have the following code that does NOT work:
public static BufferedImage isolateChannel(BufferedImage image,
                                           EIsolateChannel channel)
{
    BufferedImage result = new BufferedImage(image.getWidth(),
                                             image.getHeight(),
                                             image.getType());
    int iAlpha = 0;
    int iRed = 0;
    int iGreen = 0;
    int iBlue = 0;
    int newPixel = 0;

    for (int i = 0; i < image.getWidth(); i++)
    {
        for (int j = 0; j < image.getHeight(); j++)
        {
            iAlpha = new Color(image.getRGB(i, j)).getAlpha();
            iRed   = new Color(image.getRGB(i, j)).getRed();
            iGreen = new Color(image.getRGB(i, j)).getGreen();
            iBlue  = new Color(image.getRGB(i, j)).getBlue();

            if (channel.equals(EIsolateChannel.ISOLATE_RED_CHANNEL))
            {
                newPixel = iRed;
            }
            if (channel.equals(EIsolateChannel.ISOLATE_GREEN_CHANNEL))
            {
                newPixel = iGreen;
            }
            if (channel.equals(EIsolateChannel.ISOLATE_BLUE_CHANNEL))
            {
                newPixel = iBlue;
            }

            result.setRGB(i, j, newPixel);
        }
    }

    return result;
}
By isolating a channel I mean that if, for example, the red channel is selected for isolation, only the red component of the picture is shown.
Color in Java is defined as a packed integer: in a 32-bit integer, the first 8 bits are alpha, the next 8 are red, the next 8 are green and the last 8 are blue.
Suppose the following is a 32-bit integer representing a color. Then:
AAAAAAAA RRRRRRRR GGGGGGGG BBBBBBBB
^Alpha   ^Red     ^Green   ^Blue
That is, each of alpha, red, green and blue is basically 8 bits with values from 0 to 255 (the color range). So when you want to combine these individual components back into the 32-bit integer color, you should write
color = alpha << 24 | red << 16 | green << 8 | blue
So, as per these rules, change the code to the following:
if (channel.equals(EIsolateChannel.ISOLATE_RED_CHANNEL))
{
    newPixel = newPixel | iRed << 16;
    // Could also write newPixel = iRed << 16, since newPixel is always 0 before this
}
if (channel.equals(EIsolateChannel.ISOLATE_GREEN_CHANNEL))
{
    newPixel = newPixel | iGreen << 8;
}
if (channel.equals(EIsolateChannel.ISOLATE_BLUE_CHANNEL))
{
    newPixel = newPixel | iBlue;
}
Note: I have ORed newPixel with each component to allow displaying multiple channels simultaneously, i.e. you could display red and green with blue turned off.
UPDATE
The second error you are getting is because you are not resetting the value of newPixel on each iteration. To fix it, add the line newPixel = 0 inside the loop.
Your code should be:
newPixel=0; //Add this line
iAlpha=new Color(img.getRGB(x, y)).getAlpha();
iRed=new Color(img.getRGB(x, y)).getRed();
iGreen=new Color(img.getRGB(x, y)).getGreen();
iBlue=new Color(img.getRGB(x, y)).getBlue();
For added efficiency, I would suggest using bit shifts to obtain the red, green, blue and alpha values:
int rgb = img.getRGB(x,y);
iAlpha = rgb>>24 & 0xff;
iRed = rgb >>16 & 0xff;
iGreen = rgb>>8 & 0xff;
iBlue = rgb & 0xff;
This code runs faster because it does not create 4 Color objects for each pixel of the source image.
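Putting the pieces together, the inner loop might end up looking roughly like the sketch below. It keeps the variable names used above and, as an extra assumption, preserves the original alpha so the result is not fully transparent when the image type has an alpha channel:
int rgb = img.getRGB(x, y);
iAlpha = rgb >> 24 & 0xff;
iRed   = rgb >> 16 & 0xff;
iGreen = rgb >> 8  & 0xff;
iBlue  = rgb       & 0xff;

// reset for this pixel and keep the original alpha
newPixel = iAlpha << 24;

if (channel.equals(EIsolateChannel.ISOLATE_RED_CHANNEL)) {
    newPixel = newPixel | iRed << 16;
}
if (channel.equals(EIsolateChannel.ISOLATE_GREEN_CHANNEL)) {
    newPixel = newPixel | iGreen << 8;
}
if (channel.equals(EIsolateChannel.ISOLATE_BLUE_CHANNEL)) {
    newPixel = newPixel | iBlue;
}

result.setRGB(x, y, newPixel);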
Try this:
int width = bufferedImage.getWidth(), height = bufferedImage.getHeight();
Object dataElements = null;
Raster raster = bufferedImage.getRaster();
ColorModel colorModel = bufferedImage.getColorModel();
int red, blue, green, alpha;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        dataElements = raster.getDataElements(x, y, dataElements);
        // extract colors
        red = colorModel.getRed(dataElements);
        blue = colorModel.getBlue(dataElements);
        green = colorModel.getGreen(dataElements);
        alpha = colorModel.getAlpha(dataElements);
    }
}
I have been trying out some basic image processing using java (after a long gap).
Whatever operation I do on the original image and then save as a new image, the output image always appears DULL (maybe an issue with opacity or transparency).
I am pasting the function I am using to do this job below:
// Returns a blurred java buffered image
public static BufferedImage blurImage(BufferedImage image)
{
    int w = image.getWidth();
    int h = image.getHeight();
    int alpha = 0;
    int red, green, blue, newPix;
    int pix[] = null;
    BufferedImage newImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);

    for (int i = 0, j = 0; i < w; i++)
    {
        for (j = 0; j < h; j++)
        {
            // getSurroundingPixels is a helper (not shown) returning the clamped neighbourhood of (i, j)
            pix = getSurroundingPixels(image, i > 0 ? i - 1 : 0, i < w - 1 ? i + 1 : w - 1,
                                              j > 0 ? j - 1 : 0, j < h - 1 ? j + 1 : h - 1);
            red = green = blue = 0;
            for (int k = 0; k < pix.length; k++)
            {
                red   += (pix[k] >> 16) & 0xFF;
                green += (pix[k] >> 8)  & 0xFF;
                blue  += (pix[k])       & 0xFF;
            }
            alpha = (image.getRGB(i, j) >> 24) & 0xFF;
            red   /= pix.length;
            green /= pix.length;
            blue  /= pix.length;
            newPix = ((alpha << 24) | (red << 16) | (green << 8) | blue);
            newImage.setRGB(i, j, newPix);
        }
    }
    return newImage;
}
I would appreciate someone helping me with this issue.
I have now replaced BufferedImage.TYPE_INT_ARGB with BufferedImage.TYPE_INT_RGB; after this, the processed image doesn't appear dull and looks normal. Can you please explain why this happens?
TYPE_INT_ARGB has a DirectColorModel with alpha; TYPE_INT_RGB has a DirectColorModel without alpha. Your algorithm scales the RGB, but clones the A. At a guess, your test image is opaque, possibly a .jpg image, requiring BufferedImage.TYPE_INT_RGB. You may want to examine your image using this example that scales A or this example that illustrates the color conversion done by setRGB().
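One way to sidestep the mismatch entirely, just a suggestion rather than the only fix, is to create the destination image with the same type as the source instead of hard-coding TYPE_INT_ARGB:
// use the source image's own type; getType() can return TYPE_CUSTOM (0), so fall back to RGB in that case
int type = (image.getType() == BufferedImage.TYPE_CUSTOM) ? BufferedImage.TYPE_INT_RGB : image.getType();
BufferedImage newImage = new BufferedImage(w, h, type);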
I know the Java formula for grayscale is this: 0.2126 * red + 0.7152 * green + 0.0722 * blue.
I was wondering if anyone knows where I can find a greater variety of coloring formulas, e.g. if I want to give the picture an old-fashioned look, make it more orange, brighter, darker, sharper, and so on.
int pixel = image.getRGB(j, i);
int red   = (pixel >> 16) & 0xff;
int green = (pixel >> 8)  & 0xff;
int blue  =  pixel        & 0xff;
int gray  = (int) (0.2126 * red + 0.7152 * green + 0.0722 * blue);
int newPixel = (gray << 16) | (gray << 8) | gray;   // pack the grey value into all three channels
image1.setRGB(j, i, newPixel);
The old-fashioned look you mention is called the "sepia" effect. Take a look at this question, particularly this answer, which points to the following code snippet (note that I did not write this code, I am just helping you find answers to your question):
/**
 *
 * @param img Image to modify
 * @param sepiaIntensity From 0-255, 30 produces nice results
 * @throws Exception
 */
public static void applySepiaFilter(BufferedImage img, int sepiaIntensity) throws Exception
{
    // Play around with this. 20 works well and was recommended
    // by another developer. 0 produces a grey image
    int sepiaDepth = 20;

    int w = img.getWidth();
    int h = img.getHeight();

    WritableRaster raster = img.getRaster();

    // We need 3 integers (for R,G,B color values) per pixel.
    int[] pixels = new int[w * h * 3];
    raster.getPixels(0, 0, w, h, pixels);

    // Process 3 ints at a time for each pixel. Each pixel has 3 RGB colors in the array
    for (int i = 0; i < pixels.length; i += 3)
    {
        int r = pixels[i];
        int g = pixels[i + 1];
        int b = pixels[i + 2];

        int gry = (r + g + b) / 3;
        r = g = b = gry;

        r = r + (sepiaDepth * 2);
        g = g + sepiaDepth;

        if (r > 255) r = 255;
        if (g > 255) g = 255;
        if (b > 255) b = 255;

        // Darken blue color to increase sepia effect
        b -= sepiaIntensity;

        // normalize if out of bounds
        if (b < 0) b = 0;
        if (b > 255) b = 255;

        pixels[i] = r;
        pixels[i + 1] = g;
        pixels[i + 2] = b;
    }
    raster.setPixels(0, 0, w, h, pixels);
}
I would just play with the numbers.
more orange,
more red and a little more green (red + green = yellow)
brighter
increase all the factors
darker
decrease all the factors
sharper
This is a specific filter which compares surrounding pixels to find edges. It is not just a matter of playing with the colours.
BTW: You should add capping of the values. i.e. Math.min(255, Math.max(0, value))
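As a concrete illustration of playing with the numbers, here is a sketch that applies arbitrary per-channel factors (a warm, slightly orange tint) and caps the results as suggested, reusing the variable names from the question's snippet:
int pixel = image.getRGB(j, i);
int red   = (pixel >> 16) & 0xff;
int green = (pixel >> 8)  & 0xff;
int blue  =  pixel        & 0xff;

// boost red the most, green a little, leave blue alone; the factors are arbitrary, tweak to taste
red   = Math.min(255, Math.max(0, (int) (red * 1.2)));
green = Math.min(255, Math.max(0, (int) (green * 1.1)));
blue  = Math.min(255, Math.max(0, blue));

image1.setRGB(j, i, (red << 16) | (green << 8) | blue);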
You can manipulate the proportion between the color channels in order to change the scene "atmosphere". The images below were created using the ColorChannel plug-in.
The algorithm's source code is presented below. The method getAttribute() gets the parameters (red, green, blue) passed by the user. The methods getIntComponent0, getIntComponent1 and getIntComponent2 get each color channel (red, green and blue). The method setIntColor sets back the value of each channel.
@Override
public void process
(
    MarvinImage imageIn,
    MarvinImage imageOut,
    MarvinAttributes attrOut,
    MarvinImageMask mask,
    boolean preview
) {
    int vr = (Integer) getAttribute("red");
    int vg = (Integer) getAttribute("green");
    int vb = (Integer) getAttribute("blue");

    double mr = 1 + Math.abs((vr / 100.0) * 2.5);
    double mg = 1 + Math.abs((vg / 100.0) * 2.5);
    double mb = 1 + Math.abs((vb / 100.0) * 2.5);

    mr = (vr > 0 ? mr : 1.0 / mr);
    mg = (vg > 0 ? mg : 1.0 / mg);
    mb = (vb > 0 ? mb : 1.0 / mb);

    int red, green, blue;
    for (int y = 0; y < imageIn.getHeight(); y++) {
        for (int x = 0; x < imageIn.getWidth(); x++) {
            red   = imageIn.getIntComponent0(x, y);
            green = imageIn.getIntComponent1(x, y);
            blue  = imageIn.getIntComponent2(x, y);

            red   = (int) Math.min(red * mr, 255);
            green = (int) Math.min(green * mg, 255);
            blue  = (int) Math.min(blue * mb, 255);

            imageOut.setIntColor(x, y, 255, red, green, blue);
        }
    }
}