how to quantize the image to MxN blocks - java

I am writing a program for image-to-ASCII conversion. I have already converted the image to grayscale, but I don't know how to write the code that quantizes the image into MxN blocks:
For every sub-image of MxN size, compute its average gray value
Store the computed average gray value in a new image.
Here is the program:
public static char[][] imageToASCII(Image img, int blockWidth, int blockHeight)
{
    // Convert image from type Image to BufferedImage
    BufferedImage bufImg = convert(img);
    // Scan through each row of the image
    for (int j = 0; j < bufImg.getHeight(); j++)
    {
        // Scan through each column of the image
        for (int i = 0; i < bufImg.getWidth(); i++)
        {
            // Returns an integer pixel in the default RGB color model
            int values = bufImg.getRGB(i, j);
            // Convert the single integer pixel value to an RGB color
            Color oldColor = new Color(values);
            int red = oldColor.getRed();     // get red value
            int green = oldColor.getGreen(); // get green value
            int blue = oldColor.getBlue();   // get blue value
            // Convert RGB to grayscale using the formula
            // gray = 0.299 * R + 0.587 * G + 0.114 * B
            double grayVal = 0.299 * red + 0.587 * green + 0.114 * blue;
            // Assign each RGB channel the same gray value
            Color newColor = new Color((int) grayVal, (int) grayVal, (int) grayVal);
            // Get back the integer representation of the RGB color
            // and assign it back to the original position
            bufImg.setRGB(i, j, newColor.getRGB());
        }
    }
    // TODO: quantize into blockWidth x blockHeight blocks and map each block to a character
    return null;
}

Maybe you just need to divide the image into blocks of MxN size and average the pixels inside each block. This is similar to a macroblock in video coding.
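For illustration only (this is my own sketch, not part of the original answer), a block-averaging pass over an already-grayscaled BufferedImage could look like the following. The blocksToASCII name and the character ramp are arbitrary choices, and java.awt.Color / java.awt.image.BufferedImage are assumed to be imported:

// Sketch: average each blockWidth x blockHeight block and map it to an ASCII character.
// Assumes bufImg is already grayscale, so any single channel holds the gray value.
public static char[][] blocksToASCII(BufferedImage bufImg, int blockWidth, int blockHeight)
{
    final String ramp = "@#%*+=-:. ";              // dark -> light (example ramp)
    int rows = bufImg.getHeight() / blockHeight;   // number of blocks vertically
    int cols = bufImg.getWidth() / blockWidth;     // number of blocks horizontally
    char[][] ascii = new char[rows][cols];
    for (int by = 0; by < rows; by++) {
        for (int bx = 0; bx < cols; bx++) {
            long sum = 0;
            // Sum the gray values inside this block
            for (int y = by * blockHeight; y < (by + 1) * blockHeight; y++) {
                for (int x = bx * blockWidth; x < (bx + 1) * blockWidth; x++) {
                    sum += new Color(bufImg.getRGB(x, y)).getRed();   // red == green == blue here
                }
            }
            int avg = (int) (sum / (blockWidth * blockHeight));       // average gray value 0..255
            // Map the 0..255 average to an index into the ramp
            ascii[by][bx] = ramp.charAt(avg * (ramp.length() - 1) / 255);
        }
    }
    return ascii;
}

Storing avg into a smaller rows x cols image instead of a char array gives the "new image" of average gray values mentioned in the question.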

Related

display grayscale image in java

I created a 2D array of floats in Java representing a grayscale image, where each pixel is normalized to [0, 1].
How can I take the 2D array and display the image (in gray scale of course)?
ty!
The easiest way is to make a BufferedImage out of it. To do that, you'll have to convert the values into colors:
int toRGB(float value) {
    int part = Math.round(value * 255);
    return part * 0x10101;
}
That first converts the 0-1 range into the 0-255 range, then produces a color where all three channels (red, green and blue) have the same value, which makes a shade of gray.
Then, to make the whole image, set all the pixel values:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        image.setRGB(x, y, toRGB(theFloats[y][x]));
Once you have the image, you can save it to a file:
ImageIO.write(image, "png", new File("some/path/file.png"));
Or display it in some way, perhaps with Swing; one way is sketched below.
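As a rough sketch of the Swing option (my own addition, not part of the original answer), wrapping the BufferedImage in a JLabel inside a JFrame is enough to get it on screen:

import java.awt.image.BufferedImage;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

// Minimal viewer: shows a BufferedImage in a window sized to fit it.
static void showImage(BufferedImage image) {
    JFrame frame = new JFrame("Grayscale image");
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.add(new JLabel(new ImageIcon(image)));
    frame.pack();            // size the window to the image
    frame.setVisible(true);
}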

Histogram Equalization of image using lookup table

I am attempting to histogram equalize a grayscale image in Java. The description is as follows: Iterate over the image using one band of each pixel's RGB as the index of the look-up table to determine the new pixel value for the image. Set the RGB for each pixel to the RGB corresponding to the new pixel value.
Implementing this I get an image that is tinted blue rather than the expected equalized result (both screenshots omitted).
Here is the code I have so far:
private void histogramEqualize(BufferedImage im, int[] lut) {
    for (int x = 0; x < im.getWidth(); x++) {
        for (int y = 0; y < im.getHeight(); y++) {
            Color c = new Color(im.getRGB(x, y));
            Color eq = new Color(lut[c.getRed()], c.getGreen(), c.getBlue());
            im1.setRGB(x, y, eq.getRGB());
        }
    }
}
public int[] getLookupTable(int[] h, int n) {
    // h: Histogram for im1 in either the red band or luminance.
    lut = new int[256];
    double sf = 255 / n;
    int sumH = 0;
    int sk = 0;
    for (int i = 0; i < h.length; i++) {
        sumH += h[i];
        sk = (int) (sf * sumH);
        lut[i] = sk;
    }
    return lut;
}
I also tried changing Color eq = new Color(lut[c.getRed()], c.getGreen(), c.getBlue()); to Color eq = new Color(lut[c.getRed()], lut[c.getGreen()], lut[c.getBlue()]); but this resulted in a black image.
You have mentioned that you want to apply histogram equalization to a grayscale image, but you are using the RGB color values of the pixels.
For a grayscale image you only need to normalize the gray levels for histogram equalization, as below:
1) Iterate through the grayscale pixel values and build a histogram of the gray levels by counting their occurrences in the image.
2) Find the cumulative distribution of the above histogram.
3) Iterate through the grayscale pixel values in the original image and replace each value with its corresponding normalized value using the formula below:
h(v) = round( (cdf(v) - cdfmin) / (M*N - cdfmin) * L )
where L = 255, that is, the maximum gray level,
M = image height,
N = image width,
MxN = total number of pixels in the image,
cdfmin = minimum value of the cumulative distribution data from step 2.
This will get you the new normalized image matrix.
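As an illustration (my own sketch, not from the original answer), the three steps for an 8-bit grayscale BufferedImage, where every pixel already has equal red, green and blue values, could look like this (java.awt.image.BufferedImage assumed to be imported):

// Sketch: histogram equalization of an 8-bit grayscale BufferedImage.
static void equalize(BufferedImage im) {
    int w = im.getWidth(), h = im.getHeight();
    // Step 1: histogram of the gray levels
    int[] hist = new int[256];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            hist[im.getRGB(x, y) & 0xFF]++;            // low byte == gray level
    // Step 2: cumulative distribution and its smallest non-zero value
    int[] cdf = new int[256];
    int sum = 0, cdfMin = 0;
    for (int i = 0; i < 256; i++) {
        sum += hist[i];
        cdf[i] = sum;
        if (cdfMin == 0 && cdf[i] > 0) cdfMin = cdf[i];
    }
    // Step 3: remap every pixel with the formula above
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int v = im.getRGB(x, y) & 0xFF;
            int eq = Math.round(255f * (cdf[v] - cdfMin) / (w * h - cdfMin));
            im.setRGB(x, y, (eq << 16) | (eq << 8) | eq);
        }
    }
}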
If you want to apply histogram equalization to an RGB image, then you will need to convert from the RGB color space to the HSV color space and apply the same steps as for a grayscale image to the value channel, without changing the hue and saturation values.

How to convert an image to a 0-255 grey scale image

I want to convert an image to a grayscale image where the pixel intensities are between 0 and 255.
I was able to convert images to grayscale with the following Java method.
public void ConvertToGrayScale(BufferedImage bufImage, int ImgWidth, int ImgHeight) {
    for (int w = 0; w < ImgWidth; w++) {
        for (int h = 0; h < ImgHeight; h++) {
            Color color = new Color(bufImage.getRGB(w, h));
            int ColAvgVal = ((color.getRed() + color.getGreen() + color.getBlue()) / 3);
            Color avg = new Color(ColAvgVal, ColAvgVal, ColAvgVal);
            bufImage.setRGB(w, h, avg.getRGB());
            System.out.println(avg.getRGB());
        }
    }
}
"System.out.println(avg.getRGB());" is used to see the pixel intensities but the all the grey levels are minus values and not between 0-255.
Am I doing it wrong ? How would I convert an image to a gray scale image where pixel intensities are between 0-255.
Thanks
color.getRGB() does not return a value from 0..255; it returns an integer composed of your red, green and blue values, together with the alpha value. Presumably this alpha value is 0xFF, which makes any combined color end up as 0xFFrrggbb, or, as you saw, a large negative number when written in decimal.
To see the "gray" level assigned, just check ColAvgVal.
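(A small illustration of that, not part of the original answer:) if you later want the 0..255 gray level back from a packed value, mask out one channel:

// Any channel works after the conversion because red == green == blue
int gray = bufImage.getRGB(w, h) & 0xFF;   // low byte = blue channel, in 0..255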
Note that a better formula to convert between RGB and grayscale is to use the PAL/NTSC conversion:
gray = 0.299 * red + 0.587 * green + 0.114 * blue
because "full blue" should be darker in grayscale than "full red" and "full green".
Note: if you use this formula directly, watch out for floating point rounding errors. In theory, it should not return a value outside of 0..255 for gray; in practice, it will. So test and clamp the result.
Another option which does not require testing-and-clamping per pixel, is to use an integer-only version:
gray = (299 * red + 587 * green + 114 * blue)/1000;
which should work with only a very small rounding error.
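For illustration (my own sketch, not part of the original answer), the integer-only version could be dropped into a loop like the one in the question; the method name here is arbitrary:

// Sketch: grayscale conversion with the integer-only luminance formula.
public static void toGrayScale(BufferedImage img) {
    for (int w = 0; w < img.getWidth(); w++) {
        for (int h = 0; h < img.getHeight(); h++) {
            Color color = new Color(img.getRGB(w, h));
            // Weighted average; the result stays within 0..255, so no clamping needed
            int gray = (299 * color.getRed() + 587 * color.getGreen() + 114 * color.getBlue()) / 1000;
            img.setRGB(w, h, new Color(gray, gray, gray).getRGB());
            System.out.println(gray);   // prints the 0..255 gray level, not the packed ARGB int
        }
    }
}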
I hope this can help you. You can check some different methods, like:
// The average grayscale method
private static BufferedImage avg(BufferedImage original) {
    int alpha, red, green, blue;
    int newPixel;
    BufferedImage avg_gray = new BufferedImage(original.getWidth(), original.getHeight(), original.getType());
    int[] avgLUT = new int[766];
    for (int i = 0; i < avgLUT.length; i++)
        avgLUT[i] = (int) (i / 3);
    for (int i = 0; i < original.getWidth(); i++) {
        for (int j = 0; j < original.getHeight(); j++) {
            // Get pixels by R, G, B
            alpha = new Color(original.getRGB(i, j)).getAlpha();
            red = new Color(original.getRGB(i, j)).getRed();
            green = new Color(original.getRGB(i, j)).getGreen();
            blue = new Color(original.getRGB(i, j)).getBlue();
            newPixel = red + green + blue;
            newPixel = avgLUT[newPixel];
            // Return back to original format
            newPixel = colorToRGB(alpha, newPixel, newPixel, newPixel);
            // Write pixels into image
            avg_gray.setRGB(i, j, newPixel);
        }
    }
    return avg_gray;
}
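The snippet above calls a colorToRGB helper that is not shown. A minimal version (my reconstruction, assuming it simply packs the four channels back into one ARGB int) could be:

// Packs alpha, red, green and blue (each 0..255) back into a single ARGB int.
private static int colorToRGB(int alpha, int red, int green, int blue) {
    return (alpha << 24) | (red << 16) | (green << 8) | blue;
}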

Image analysis function to calculate middle gray level (max(z)+min(z))/2 in Java

How do I calculate the middle gray level (max(z)+min(z))/2 over the points where the structuring element is 1, and set the output pixel to that value?
I only know a little about how to get the RGB value of each pixel using image.getRGB(x,y). I have no idea how to get the gray level value of each pixel, or what z in the formula refers to.
Please help me with this. Thanks in advance.
I'm going to assume that z denotes the pixels covered by your structuring element, and that "structuring element" is meant in the morphological sense. Here are a few pointers before we start:
You can convert a colour pixel to its graylevel intensity by using the Luminance formula. By consulting the SMPTE Rec. 709 standard, the output graylevel intensity, given the RGB components is: Y = 0.2126*R + 0.7152*G + 0.0722*B.
We're going to assume that the structuring element has odd dimensions. This will allow for symmetric placement of the structuring element around each pixel in your image.
I'm going to assume that your image is already loaded in as a BufferedImage.
Your structuring element will be a 2D array of int.
I'm not going to process those pixels where the structuring element traverses out of bounds to make things easy.
As such, the basic algorithm is this:
For each pixel in our image, place the centre of the structuring element at this location
For each pixel location where the structuring element is 1 that coincides with this position, find the max and minimum graylevel intensity
Set the output image pixel at this location to be (max(z) + min(z)) / 2.
Without further ado:
public BufferedImage calculateMiddleGray(BufferedImage img, int[][] mask)
{
    // Declare output image
    BufferedImage outImg = new BufferedImage(img.getWidth(),
        img.getHeight(), BufferedImage.TYPE_INT_RGB);
    // For each pixel in our image...
    for (int i = mask.length/2; i < img.getWidth() - mask.length/2; i++) {
        for (int j = mask[0].length/2; j < img.getHeight() - mask[0].length/2; j++) {
            int maxPix = -1;
            int minPix = 256;
            // For each pixel in the mask...
            for (int x = -mask.length/2; x <= mask.length/2; x++) {
                for (int y = -mask[0].length/2; y <= mask[0].length/2; y++) {
                    // Obtain structuring element pixel
                    int structPix = mask[y+mask.length/2][x+mask[0].length/2];
                    // If not 1, continue
                    if (structPix != 1)
                        continue;
                    // Get RGB pixel
                    int rgb = img.getRGB(i+x, j+y);
                    // Get red, green and blue channels individually
                    int redPixel = (rgb >> 16) & 0xFF;
                    int greenPixel = (rgb >> 8) & 0xFF;
                    int bluePixel = rgb & 0xFF;
                    // Convert to grayscale
                    // Approximates the SMPTE Rec. 709 luminance conversion using integer logic
                    // (54/256, 183/256 and 18/256 are close to 0.2126, 0.7152 and 0.0722)
                    int lum = (54*redPixel + 183*greenPixel + 18*bluePixel) >> 8;
                    // Find max and min appropriately
                    if (lum > maxPix)
                        maxPix = lum;
                    if (lum < minPix)
                        minPix = lum;
                }
            }
            // Set output pixel
            // A grayscale image has all of its RGB channels equal
            int outPixel = (maxPix + minPix) / 2;
            // Cap output - ensure we don't go out of bounds
            if (outPixel > 255)
                outPixel = 255;
            if (outPixel < 0)
                outPixel = 0;
            int finalOut = (outPixel << 16) | (outPixel << 8) | outPixel;
            outImg.setRGB(i, j, finalOut);
        }
    }
    // Return the processed image
    return outImg;
}
To call this method, create an image img using any standard method, then create a structuring element mask that is a 2D integer array. After that, place this method in your class and invoke it with:
BufferedImage outImg = calculateMiddleGray(img, mask);
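For instance (my own example, not from the original answer), a simple 3x3 all-ones structuring element could be defined as:

// 3x3 structuring element with every entry set to 1
int[][] mask = {
    {1, 1, 1},
    {1, 1, 1},
    {1, 1, 1}
};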
Also (and of course), make sure you import the necessary package for the BufferedImage class, or:
import java.awt.image.BufferedImage;
Note: This is untested code. Hope it works!

How to convert pixels to gray scale?

Ok, I am using Processing which allows me to access pixels of any image as int[]. What I now want to do is to convert the image to gray-scale. Each pixel has a structure as shown below:
...........PIXEL............
[red | green | blue | alpha]
<-8--><--8---><--8--><--8-->
Now, what transformations do I need to apply to the individual RGB values to make the image gray-scale?
What I mean is, how much do I add or subtract to make the image gray-scale?
Update
I found a few methods here: http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
For each pixel, the red, green and blue channels should be set to their average, like this:
int red = pixel.R;
int green = pixel.G;
int blue = pixel.B;
pixel.R = pixel.G = pixel.B = (red + green + blue) / 3;
Since in your case the pixel colors seem to be stored in an array rather than in properties, your code could end up looking like:
int red = pixel[0];
int green = pixel[1];
int blue = pixel[2];
pixel[0] = pixel[1] = pixel[2] = (red + green + blue) / 3;
The general idea is that in a grayscale image each pixel's color measures only the intensity of light at that point, and a simple way to approximate that intensity is to average the three color channels.
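Since Processing actually stores each pixel of pixels[] as a single packed ARGB int rather than a small per-pixel array, a sketch of the same averaging idea with bit shifts (my own illustration, assuming img is a loaded PImage) could look like:

// Sketch: average-based grayscale on a PImage's pixels[] array.
// Each entry is a packed ARGB int: alpha in bits 24-31, red 16-23, green 8-15, blue 0-7.
img.loadPixels();
for (int i = 0; i < img.pixels.length; i++) {
    int c = img.pixels[i];
    int r = (c >> 16) & 0xFF;
    int g = (c >> 8) & 0xFF;
    int b = c & 0xFF;
    int avg = (r + g + b) / 3;                                          // simple average
    img.pixels[i] = (c & 0xFF000000) | (avg << 16) | (avg << 8) | avg;  // keep the original alpha
}
img.updatePixels();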
The following code loads an image and cycles through its pixels, changing the saturation to zero while keeping the same hue and brightness values.
PImage img;

void setup() {
    colorMode(HSB, 100);
    img = loadImage("img.png");
    size(img.width, img.height);
    float sat = 0;   // zero saturation gives gray
    img.loadPixels();
    for (int i = 0; i < width * height; i++) {
        img.pixels[i] = color(hue(img.pixels[i]), sat, brightness(img.pixels[i]));
    }
    img.updatePixels();
    image(img, 0, 0);
}
