I created a 2D array of floats in Java, representing a grayscale image, where each pixel is normalized to the range [0, 1].
How can I take the 2D array and display the image (in grayscale, of course)?
ty!
The easiest way is to make a BufferedImage out of it. To do that, you'll have to convert the values into colors:
int toRGB(float value) {
    int part = Math.round(value * 255);
    return part * 0x10101;
}
That first converts the 0-1 range into the 0-255 range, then produces a color where all three channels (RGB: red, green and blue) have the same value, which makes a shade of gray.
Then, to make the whole image, set all the pixel values:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        image.setRGB(x, y, toRGB(theFloats[y][x]));
Once you have the image, you can save it to a file:
ImageIO.write(image, "png", new File("some/path/file.png"));
Or display it in some way, perhaps with Swing. See for example this question.
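If it helps, here is a minimal Swing sketch for displaying the image; the ImagePreview class and the show method are just illustrative names, not part of the original answer:

import java.awt.image.BufferedImage;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

// Hypothetical helper: opens a window showing the given image at its natural size.
class ImagePreview {
    static void show(BufferedImage image) {
        JFrame frame = new JFrame("Grayscale preview");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(new JLabel(new ImageIcon(image)));
        frame.pack();
        frame.setVisible(true);
    }
}

Calling ImagePreview.show(image) after the loop above should be enough for a quick visual check.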
I am attempting to histogram equalize a grayscale image in Java. The description is as follows: Iterate over the image using one band of each pixel's RGB as the index of the look-up table to determine the new pixel value for the image. Set the RGB for each pixel to the RGB corresponding to the new pixel value.
Implementing this I get an image that is tinted blue:
[result image removed]
Expected result:
[image removed]
Here is the code I have so far:
private void histogramEqualize(BufferedImage im, int[] lut) {
    for (int x = 0; x < im.getWidth(); x++) {
        for (int y = 0; y < im.getHeight(); y++) {
            Color c = new Color(im.getRGB(x, y));
            Color eq = new Color(lut[c.getRed()], c.getGreen(), c.getBlue());
            im.setRGB(x, y, eq.getRGB());
        }
    }
}
public int[] getLookupTable(int[] h, int n) {
    // h: Histogram for im1 in either the red band or luminance.
    lut = new int[256];
    double sf = 255/n;
    int sumH = 0;
    int sk = 0;
    for (int i = 0; i < h.length; i++) {
        sumH += h[i];
        sk = (int) (sf * sumH);
        lut[i] = sk;
    }
    return lut;
}
I also tried changing Color eq = new Color(lut[c.getRed()], c.getGreen(), c.getBlue()); to Color eq = new Color(lut[c.getRed()], lut[c.getGreen()], lut[c.getBlue()]); but this resulted in a black image.
You have mentioned that you want to apply histogram equalization to a grayscale image, but you are using the RGB color values of the pixels.
For a grayscale image you only need to normalize the gray levels for histogram equalization, as below:
1) Iterate through the grayscale pixel values and build a histogram by counting the occurrences of each gray level in the image.
2) Find the cumulative distribution of the above histogram.
3) Iterate through each grayscale pixel value v in the original image and replace it with its corresponding normalized value using the formula below:
newValue(v) = round( (cdf(v) - cdfmin) / (M×N - cdfmin) × L )
where L = 255, the maximum gray level,
M = image height,
N = image width,
M×N is the total number of pixels in the image, and
cdfmin is the minimum non-zero value of the cumulative distribution from step 2.
This will get you the new normalized image matrix.
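A minimal sketch of the three steps above, assuming the input is a TYPE_BYTE_GRAY BufferedImage (the method name equalizeGray is just illustrative):

import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

// Sketch only: equalizes a grayscale BufferedImage in place.
static void equalizeGray(BufferedImage img) {
    int w = img.getWidth(), h = img.getHeight();
    WritableRaster raster = img.getRaster();

    // Step 1: histogram of the gray levels.
    int[] hist = new int[256];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            hist[raster.getSample(x, y, 0)]++;

    // Step 2: cumulative distribution of the histogram.
    int[] cdf = new int[256];
    int sum = 0;
    for (int i = 0; i < 256; i++) {
        sum += hist[i];
        cdf[i] = sum;
    }
    int cdfMin = 0;
    for (int i = 0; i < 256; i++) {
        if (cdf[i] > 0) { cdfMin = cdf[i]; break; }
    }
    int total = w * h;
    if (total == cdfMin) return; // flat image, nothing to equalize

    // Step 3: remap every pixel with the formula above (L = 255).
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int v = raster.getSample(x, y, 0);
            int eq = Math.round(255f * (cdf[v] - cdfMin) / (total - cdfMin));
            raster.setSample(x, y, 0, eq);
        }
    }
}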
If you want to apply histogram equalization to an RGB image, then you will need to convert from RGB to HSV color space and apply the same steps as for the grayscale image to the value channel, without changing the hue and saturation values.
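To illustrate that last point, here is a hedged sketch using java.awt.Color's HSB helpers (HSB is Java's name for HSV); the method name equalizeBrightness and the quantization of brightness to 256 bins are assumptions, not part of the answer above:

import java.awt.Color;
import java.awt.image.BufferedImage;

// Sketch only: equalizes the brightness (V) channel of an RGB image in place.
static void equalizeBrightness(BufferedImage img) {
    int w = img.getWidth(), h = img.getHeight();
    int total = w * h;

    // Convert every pixel to HSB and histogram the quantized brightness.
    float[][] hsb = new float[total][];
    int[] hist = new int[256];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            Color c = new Color(img.getRGB(x, y));
            float[] v = Color.RGBtoHSB(c.getRed(), c.getGreen(), c.getBlue(), null);
            hsb[y * w + x] = v;
            hist[Math.round(v[2] * 255)]++;
        }
    }

    // Cumulative distribution of the brightness histogram.
    int[] cdf = new int[256];
    int sum = 0;
    for (int i = 0; i < 256; i++) { sum += hist[i]; cdf[i] = sum; }
    int cdfMin = 0;
    for (int i = 0; i < 256; i++) { if (cdf[i] > 0) { cdfMin = cdf[i]; break; } }
    if (total == cdfMin) return; // flat image, nothing to equalize

    // Rewrite each pixel with the equalized brightness, keeping hue and saturation.
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float[] v = hsb[y * w + x];
            int q = Math.round(v[2] * 255);
            float newV = (cdf[q] - cdfMin) / (float) (total - cdfMin);
            img.setRGB(x, y, Color.HSBtoRGB(v[0], v[1], newV));
        }
    }
}

Note that this sketch keeps the whole HSB conversion in memory, which is fine for normal-sized images.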
I have to handle VERY large (1-2 GB) TIFF files, and only need to do some RGB manipulations on pixels, where I only make local corrections (the new color of a pixel depends only on its old value, not on e.g. neighboring pixels).
Is there a way (in Java) to read the file as some kind of pixel stream, make adjustments to the RGB values, and write the result immediately to another file? I will not have enough memory to hold the entire file in RAM (or at least I hope I can avoid that).
Thx for any hints...
THX
-Marco
Well, I don't actually know what a TIFF file is 😅, but if it is a file that you can load into a BufferedImage, it should be relatively easy.
I would do something like:
public BufferedImage correctRGB(BufferedImage original)
{
    //width and height are the width and height of the original image
    int width = original.getWidth();
    int height = original.getHeight();
    BufferedImage b = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    Graphics g = b.getGraphics();
    for(int x = 0; x < width; x++)
    {
        for(int y = 0; y < height; y++)
        {
            //loop through all the pixels in the image ONCE to spare RAM
            int pixel = original.getRGB(x, y);
            int alpha = (pixel >> 24) & 0xff;
            int red = (pixel >> 16) & 0xff;
            int green = (pixel >> 8) & 0xff;
            int blue = pixel & 0xff;
            //in here you play around with the values
            g.setColor(new Color(red, green, blue, alpha));
            g.fillRect(x, y, 1, 1);
        }
    }
    g.dispose();
    return b;
}
You can basically do everything you want with the ARGB values now.
For example, you could turn the whole image negative by doing red = 255 - red and so on.
Or turn the whole image into grayscale by doing:
int average = (red + green + blue) / 3;
g.setColor(new Color(average, average, average, alpha));
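As a sketch of the "negative" idea working directly on a packed ARGB int (the helper name invertPixel is just illustrative):

// Illustrative helper: inverts the RGB channels of one packed ARGB pixel,
// leaving the alpha channel untouched.
static int invertPixel(int argb) {
    int alpha = (argb >> 24) & 0xff;
    int red   = 255 - ((argb >> 16) & 0xff);
    int green = 255 - ((argb >> 8) & 0xff);
    int blue  = 255 - (argb & 0xff);
    return (alpha << 24) | (red << 16) | (green << 8) | blue;
}

Calling b.setRGB(x, y, invertPixel(original.getRGB(x, y))) inside the loop has the same effect as the Graphics approach, without creating a Color object per pixel.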
OK, I am using Processing, which allows me to access the pixels of any image as an int[]. What I now want to do is to convert the image to grayscale. Each pixel has a structure as shown below:
...........PIXEL............
[red | green | blue | alpha]
<-8--><--8---><--8--><--8-->
Now, what transformation do I need to apply to the individual RGB values to make the image grayscale?
What I mean is, how much do I add/subtract to make the image grayscale?
Update
I found a few methods here: http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
For each pixel, the red, green and blue channels should all be set to their average. Like this:
int red = pixel.R;
int green = pixel.G;
int blue = pixel.B;
pixel.R = pixel.G = pixel.B = (red + green + blue) / 3;
Since in your case the pixel colors seem to be stored in an array rather than in properties, your code could end up looking like:
int red = pixel[0];
int green = pixel[1];
int blue = pixel[2];
pixel[0] = pixel[1] = pixel[2] = (red + green + blue) / 3;
The general idea is that in a grayscale image each pixel's color measures only the intensity of light at that point, and the way we perceive that intensity is roughly the average of the three color channels.
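Since Processing actually packs each pixel into a single ARGB int rather than a per-channel array, a sketch of the same averaging applied directly to a PImage's pixels[] array could look like this (img is assumed to be an already loaded PImage):

// Sketch: average the R, G and B of every pixel of a PImage in place.
img.loadPixels();
for (int i = 0; i < img.pixels.length; i++) {
  int c = img.pixels[i];
  int r = (c >> 16) & 0xFF;
  int g = (c >> 8) & 0xFF;
  int b = c & 0xFF;
  int avg = (r + g + b) / 3;
  // keep the original alpha, write the average into all three channels
  img.pixels[i] = (c & 0xFF000000) | (avg << 16) | (avg << 8) | avg;
}
img.updatePixels();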
The following code loads an image and cycles through its pixels, changing the saturation to zero while keeping the same hue and brightness values.
PImage img;
void setup() {
  colorMode(HSB, 100);
  img = loadImage("img.png");
  size(img.width, img.height);
  float sat = 0; // zero saturation gives a grayscale result
  img.loadPixels();
  for (int i = 0; i < width * height; i++) {
    img.pixels[i] = color(hue(img.pixels[i]), sat, brightness(img.pixels[i]));
  }
  img.updatePixels();
  image(img, 0, 0);
}
I'm using a flood fill algorithm to sort through an image. If it encounters the same color, I want it to copy that pixel over into an identically sized array called filled. The array filled is then transformed back into an image and saved as a JPG. However, when I open the JPG, it appears entirely black.
public static void findFace(int[][] image) throws IOException {
    int height = image.length;
    int width = image[0].length;
    Color centerStart = new Color(image[width / 2][height / 2]);
    int[][] filled = new int[width][height];
    floodFill(width / 2, height / 2, centerStart, image, filled);
    //construct the filled array as image. Show if the face was found.
    BufferedImage bufferImage2 = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int pixel = filled[x][y] << 16 | filled[x][y] << 8 | filled[x][y];
            bufferImage2.setRGB(x, y, pixel);
        }
    }
    //save filled array as image file
    File outputfile = new File("/home/lily/Pictures/APicaDay/saved.jpg");
    ImageIO.write(bufferImage2, "jpg", outputfile);
}
public static int[][] floodFill(int x, int y, Color targetColor, int[][] image, int[][] filled) {
    if (image[x][y] != targetColor.getRGB()) {
        return filled;
    }
    filled[x][y] = image[x][y];
    floodFill(x - 1, y, targetColor, image, filled);
    floodFill(x + 1, y, targetColor, image, filled);
    floodFill(x, y - 1, targetColor, image, filled);
    floodFill(x, y + 1, targetColor, image, filled);
    return filled;
}
bonus question: I would like the flood fill to also accept colors that are similar, but not the exact same, since I'm dealing with a photograph.
The floodFill function you've posted is missing two important elements:
If the area containing the same color as the first pixel extends all the way to the boundary of the image, the function will try to access image at an invalid index. You can fix this by first checking the x and y coordinates of the pixel you are checking, and returning immediately if they are out of bounds.
If there is more than one adjacent pixel of the same color, the function will recurse infinitely, since the initial call will call floodFill on the second pixel, which will then call floodFill back on the first pixel, and so on. You need a way to make sure that you only call floodFill on a particular pixel once.
Since you're not observing either of these two symptoms, and nothing shows up in the resulting image, I suspect that the initial pixel's color check is not correct. When you pass an integer to the Color constructor, are you sure that it uses an RGB interpretation of that integer?
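For illustration, a sketch of a floodFill with both fixes and a simple tolerance test for the bonus question; the visited array and the TOLERANCE constant are additions I'm assuming here, not part of the original code:

import java.awt.Color;

static final int TOLERANCE = 32; // hypothetical per-channel threshold

static void floodFill(int x, int y, Color target, int[][] image,
                      int[][] filled, boolean[][] visited) {
    // Fix 1: stop at the image boundary.
    if (x < 0 || y < 0 || x >= image.length || y >= image[0].length) {
        return;
    }
    // Fix 2: never process the same pixel twice.
    if (visited[x][y]) {
        return;
    }
    visited[x][y] = true;

    // Bonus: accept colors that are merely similar to the target.
    Color c = new Color(image[x][y]);
    if (Math.abs(c.getRed() - target.getRed()) > TOLERANCE
            || Math.abs(c.getGreen() - target.getGreen()) > TOLERANCE
            || Math.abs(c.getBlue() - target.getBlue()) > TOLERANCE) {
        return;
    }

    filled[x][y] = image[x][y];
    floodFill(x - 1, y, target, image, filled, visited);
    floodFill(x + 1, y, target, image, filled, visited);
    floodFill(x, y - 1, target, image, filled, visited);
    floodFill(x, y + 1, target, image, filled, visited);
}

Even with these fixes, deep recursion can overflow the stack on a large photograph, so an iterative version with an explicit stack or queue is usually safer.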
I have a two-dimensional matrix which stores values between 0 and 1. I want to plot these values as levels of gray.
If the value is 1, it should be drawn as white.
If the value is 0, it should be drawn as black.
How would I do that in Java?
I tried the Color and BufferedImage classes, but I could not figure it out.
To create an image and set the pixels:
final BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        image.setRGB(x, y, color);
    }
}
color is an int, in this case in ARGB format (the top byte is alpha, then a red byte, a green byte and a blue byte). Since you're doing grayscale, you want R, G and B to be the same value. You don't want any transparency, so you should set that top alpha byte to 0xFF (fully opaque).
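For example, a sketch of how color could be computed from one of your 0-1 matrix values (the helper name toArgb and the matrix variable name are just illustrative):

// Maps a value in [0, 1] to an opaque gray ARGB int: 0 -> black, 1 -> white.
static int toArgb(float value) {
    int gray = Math.round(value * 255);                    // scale to 0-255
    return 0xFF000000 | (gray << 16) | (gray << 8) | gray; // alpha = 0xFF
}

Inside the loop you would then call image.setRGB(x, y, toArgb(matrix[y][x])).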
See: BufferedImage.setRGB()