Image mean threshold in Java

I would like to read each pixel of the image, total the pixel values, and then find the mean. I then compare each pixel value with that mean: if it is greater than the mean, the pixel value becomes 1 (representing black); if it is less, it becomes 0 (representing white). After that I set the new RGB colour and draw the output image (see the input image).
Based on my concept, I expected the output to be a black-and-white image, but it shows only black (see the output image).
public class Imej {
    public void mapping(BufferedImage image) throws IOException {
        BufferedImage binary = new BufferedImage(image.getWidth(),
                image.getHeight(), BufferedImage.TYPE_BYTE_BINARY);
        int w = image.getWidth();
        int h = image.getHeight();
        int[][] pixel = new int[w][h];
        long total = 0;
        for (int i = 0; i < w; i++)
            for (int j = 0; j < h; j++)
                total += pixel[i][j] = new Color(image.getRGB(i, j)).getRed(); // pixel value from the red channel (assumption; this part is not shown in the post)
        int mean = (int) (total / ((long) w * h));
        for (int i = 0; i < w; i++)
            for (int j = 0; j < h; j++) {
                if (pixel[i][j] > mean) pixel[i][j] = 1;
                else pixel[i][j] = 0;
                image.setRGB(i, j, new Color(pixel[i][j]).getRGB());
            }
        ImageIO.write(binary, "png", output);
    }
}
This is the readimage method:
public void readimage() {
BufferedImage image = null;
File f = null;
try {
image = ImageIO.read(new File(/** path **/));
//System.out.println(image);
mapping(image);
} catch (Exception e) {
e.printStackTrace();
System.exit(1);
}
}
public static void main(String[] args) {
Imej a = new Imej();
a.readimage();
}

You should create a mean function that returns the mean value for a pixel, something like int mean(int[][] pixels, int i, int j). Then change the first line of your if statement accordingly:
if (pixel[i][j] > mean)
    pixel[i][j] = 1;
should be
if (mean(pixel, i, j) > mean)
    pixel[i][j] = 0xFFFFFF;
The important part is the replacement value: as an RGB value, 1 (0x000001) is still essentially black, whereas 0xFFFFFF is white.
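For completeness, here is a minimal, self-contained sketch of the whole mean-threshold pass as the question describes it (one global mean over the image). The file names and the use of the red channel as the intensity value are assumptions, since the original post does not show them:

import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class MeanThreshold {
    public static void main(String[] args) throws IOException {
        BufferedImage image = ImageIO.read(new File("input.png")); // assumed input path
        int w = image.getWidth(), h = image.getHeight();

        // First pass: accumulate intensities to compute the global mean.
        long total = 0;
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                total += new Color(image.getRGB(x, y)).getRed(); // red channel used as intensity
        int mean = (int) (total / ((long) w * h));

        // Second pass: pixels brighter than the mean become white, the rest black.
        BufferedImage binary = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_BINARY);
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                binary.setRGB(x, y, new Color(image.getRGB(x, y)).getRed() > mean ? 0xFFFFFF : 0x000000);

        ImageIO.write(binary, "png", new File("output.png")); // assumed output path
    }
}

Note that the thresholded pixels have to be written into the binary image that is saved; the question's code sets pixels on the input image while saving an untouched binary image, which by itself also produces an all-black result.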

Related

Can't get gray scale of an image in Java

I have a problem getting the gray scale of a .jpg file. I am trying to create a new .jpg file as gray scale, but I am just copying the image, nothing more. Here is my code:
package training01;
import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.ImageIO;
import javax.swing.JFrame;
public class GrayScale {
BufferedImage image;
int width;
int height;
public GrayScale() {
try {
File input = new File("digital_image_processing.jpg");
image = ImageIO.read(input);
width = image.getWidth();
height = image.getHeight();
for(int i = width;i < width;i++) {
for(int j = height;j < height;j++) {
Color c = new Color(image.getRGB(i, j));
int red = c.getRed();
int green = c.getGreen();
int blue = c.getBlue();
int val = (red+green+blue)/3;
Color temp = new Color(val,val,val);
image.setRGB(i, j, temp.getRGB());
}
}
File output = new File("digital_image_processing1.jpg");
ImageIO.write(image, "jpg", output);
}catch(Exception e) {
System.out.println(e);
}
}
public static void main(String[] args) {
GrayScale gs = new GrayScale();
}
}
You need to change the following loops so that i and j start at 0 (they currently start at width and height):
for(int i = 0; i < width; i++) {
    for(int j = 0; j < height; j++) {
However, here is a faster way to do it. Write it to a new BufferedImage object that is set for gray scale.
image = ImageIO.read(input);
width = image.getWidth();
height = image.getHeight();
BufferedImage bwImage = new BufferedImage(width,
        height, BufferedImage.TYPE_BYTE_GRAY);
Graphics g = bwImage.getGraphics();
g.drawImage(image,0,0,null);
Then save the bwImage.
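For example, reusing the output file from the question's code:
ImageIO.write(bwImage, "jpg", new File("digital_image_processing1.jpg"));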
The main problem with your code, is that it won't loop, because you initialize i, j to width, height which is already greater than the exit condition of the for loops (i < width, j < height). Start iterating at 0 by initializing i and j to 0, and your code will work as intended.
For better performance, you also want to change the order of the loops. Since a BufferedImage is stored as one contiguous array, row by row, you will utilize the CPU cache much better if you loop over the x axis (along each row) in the inner loop.
Side note: I also suggest renaming i and j to x and y for better readability.
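As a sketch, the suggested loop structure (0-based, x in the inner loop, renamed to x and y) would look like this:

for (int y = 0; y < height; y++) {       // rows in the outer loop
    for (int x = 0; x < width; x++) {    // walk each row left to right for cache-friendly access
        int rgb = image.getRGB(x, y);
        // ... process pixel (x, y)
    }
}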
Finally, your method of converting RGB to gray by averaging the colors will work, but is not the most common way to convert to gray scale, as the human eye does not perceive the intensities of the colors as the same. See Wikipedia on gray scale conversion for a better understanding of correct conversion and the theory behind it.
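For reference, a common luma weighting (the ITU-R BT.601 coefficients) could be applied per pixel like this; the helper name is illustrative:

// Convert one packed ARGB pixel to gray using BT.601 luma weights,
// which match perceived brightness better than a plain average.
static int toGrayArgb(int argb) {
    int r = (argb >> 16) & 0xFF;
    int g = (argb >> 8) & 0xFF;
    int b = argb & 0xFF;
    int y = (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    return (argb & 0xFF000000) | (y << 16) | (y << 8) | y; // keep alpha, gray RGB
}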
However, all of this said, for JPEG images stored as YCbCr (the most common way to store JPEGs), there is a much faster, memory efficient and simpler way of converting the image to gray scale, and that is simply reading the Y (luminance) channel of the JPEG and use that as gray scale directly.
Using Java and ImageIO, you can do it like this:
import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import javax.imageio.*;
import javax.imageio.stream.ImageInputStream;

public class GrayJPEG {
public static void main(String[] args) throws IOException {
try (ImageInputStream stream = ImageIO.createImageInputStream(new File(args[0]))) {
ImageReader reader = ImageIO.getImageReaders(stream).next(); // Will throw exception if no reader available
try {
reader.setInput(stream);
ImageReadParam param = reader.getDefaultReadParam();
// The QnD way, just specify the gray type directly
//param.setDestinationType(ImageTypeSpecifier.createFromBufferedImageType(BufferedImage.TYPE_BYTE_GRAY));
// The very correct way, query the reader if it supports gray, and use that
Iterator<ImageTypeSpecifier> types = reader.getImageTypes(0);
while (types.hasNext()) {
ImageTypeSpecifier type = types.next();
if (type.getColorModel().getColorSpace().getType() == ColorSpace.TYPE_GRAY) {
param.setDestinationType(type);
break;
}
}
BufferedImage image = reader.read(0, param);
ImageIO.write(image, "JPEG", new File(args[0] + "_gray.jpg"));
}
finally {
reader.dispose();
}
}
}
}

Java recoloring BufferedImage not working with an image of a larger height

I have a program that is supposed to take the RGB values of an image, multiply them by some constants, and then draw the new image on a JPanel. The problem is that if my image is over a certain height, specifically over 187 pixels, the recolored image comes out different from a recolored image whose height is under 187px.
The JPanel shows this: example.
Notice how the taller recolored image differs from the shorter one. I'm sure the shorter image's colors are correct, and I have no idea how the taller one is getting messed up.
public class RecolorImage extends JPanel {
public static int scale = 3;
public static BufferedImage walk, walkRecolored;
public static BufferedImage shortWalk, shortWalkRecolored;
public static void main(String[] args) {
JFrame frame = new JFrame();
frame.setSize(200*scale, 400*scale);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.getContentPane().add(new RecolorImage());
walk = ImageLoader.loadImage("/playerWalk.png");
walkRecolored = recolor(walk);
shortWalk = ImageLoader.loadImage("/playerWalkShort.png");
shortWalkRecolored = recolor(shortWalk);
frame.setVisible(true);
}
@Override
public void paint(Graphics graphics) {
Graphics2D g = (Graphics2D) graphics;
g.scale(scale, scale);
g.drawImage(walk, 10, 10, null);
g.drawImage(walkRecolored, 40, 10, null);
g.drawImage(shortWalk, 70, 10, null);
g.drawImage(shortWalkRecolored, 100, 10, null);
}
The recolor method:
public static BufferedImage recolor(BufferedImage image) {
BufferedImage outputImage = deepCopy(image);
for (int y = 0; y < image.getHeight(); y++) {
for (int x = 0; x < image.getWidth(); x++) {
int rgb = image.getRGB(x, y);
Color c = new Color(rgb);
int r = c.getRed();
int g = c.getGreen();
int b = c.getBlue();
r *= 0.791;
g *= 0.590;
b *= 0.513;
int newRGB = (rgb & 0xff000000) | (r << 16) | (g << 8) | b;
outputImage.setRGB(x, y, newRGB);
}
}
return outputImage;
}
How I load the images and make deep copies:
public static BufferedImage loadImage(String path) {
try {
return ImageIO.read(ImageLoader.class.getResource(path));
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
public static BufferedImage deepCopy(BufferedImage image) {
ColorModel colorModel = image.getColorModel();
boolean isAlphaPremultiplied = colorModel.isAlphaPremultiplied();
WritableRaster raster = image.copyData(null);
return new BufferedImage(colorModel, raster, isAlphaPremultiplied, null);
}
My original images: the tall image and short image. Thanks for any help!
Your source images have different color models:
the short image uses 4 bytes per pixel (RGB and alpha)
the tall image uses 1 byte per pixel (index into a palette of 256 colors)
Your recolored images use the same color model as the source images (because of the deepCopy method), so the recolored version of the tall image also uses the same 256-color palette as its source and therefore cannot contain all the colors you want.
Since your recoloring code overwrites every pixel of the output image anyway, the deep copy is unnecessary. Instead, create a full-color image as the target, like this:
public static BufferedImage recolor(BufferedImage image) {
BufferedImage outputImage = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
//... code as before
}

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Coordinate out of bounds setRGB

I am new to programming and I'm currently working on a program that rotates an image to the right and upside down. I was able to get the upside down method working but not the rotate to the right (90 degrees clockwise). It keeps giving me an out of bounds error, and I'm not sure why as I have looked at other examples. Any help would be appreciated.
Here is the method that I'm working on:
public Image rotateRight()
{
Image right = new Image (this);
img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
int width = right.img.getWidth();
int height = right.img.getHeight();
for (int i = 0; i < width; i++)
for (int j = 0; j < height; j++)
{
this.img.setRGB(height-j-1,i,right.img.getRGB(i,j));
}
return right;
}
Here's the rest of the code:
import java.awt.image.*;
import java.io.*;
import javax.imageio.*;
public class Image {
private BufferedImage img = null;
int width;
int height;
private Image()
{
}
public Image (int w, int h)
{
img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB );
width = w;
height = h;
}
public Image (Image anotherImg)
{
width = anotherImg.img.getWidth();
height = anotherImg.img.getHeight();
img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB );
for (int i = 0; i < height; i++)
for (int j = 0; j < width; j++)
{
this.img.setRGB(j,i,anotherImg.img.getRGB(j,i)) ;
}
}
public String toString()
{
return "Width of Image:" +width+"\nHeight of Image:"+height;
}
public Image (String filename)
{
try
{
img = ImageIO.read(new File(filename));
width = img.getWidth();
height = img.getHeight();
}
catch (IOException e)
{
System.out.println(e);
}
}
public void save(String filename, String extension)
{
try
{
File outputfile = new File(filename);
ImageIO.write(img, extension, outputfile);
}
catch (IOException e)
{
System.out.println(e);
}
}
public Image copy()
{
Image copiedImage = new Image(this);
return copiedImage;
}
Here's Main:
public static void main (String args[])
{
Image srcimg = new Image("apple.jpg");
System.out.println(srcimg);
Image copy = srcimg.copy();
copy.save("apple-copy.jpg", "jpg");
Image copy2 = srcimg.copy();
Image right = copy2.rotateRight();
right.save("apple-rotateRight.jpg", "jpg");
}
The reason you are getting an ArrayIndexOutOfBoundsException when rotating your image is as stated. Something is out of bounds. It could be either your i variable that has exceeded its bounds or your j variable that has exceeded its bounds and this is generally easy to test for by just adding a print statement within your for loop and checking which one of the two values is out of bounds. It is good practice to try to resolve these problems yourself as you will start learning what causes these and where the problem lies.
Anyways enough of my rambling. The problem that you seem to have is that you are trying to turn the image without changing the size of the image.
You are creating a new Image with the same width and height parameters as the original
img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB );
However, when you want to rotate an image by 90 degrees you actually want to swap the width and height parameters. If you think about it, it makes sense: when you rotate an image by 90 degrees, the width becomes the height and the height becomes the width.
So your problem is here:
this.img.setRGB(height-j-1,i,right.img.getRGB(i,j));
The bounds for the x parameter of setRGB run from 0 to the width of the destination image minus 1, and the y parameter from 0 to its height minus 1. Your destination image keeps the original width and height, but you pass i (which runs up to width - 1) as the y coordinate, so it exceeds the destination's height whenever the image is wider than it is tall. For example, if WIDTH is 200 and HEIGHT is 100, i can reach 199 while the y coordinate must stay below 100, which is clearly out of bounds. However, if we change your code to
img = new BufferedImage(height, width, BufferedImage.TYPE_INT_RGB );
the destination is now 100 pixels wide and 200 pixels tall, so the x coordinate height - j - 1 stays within 0..99 and the y coordinate i stays within 0..199, both inside the bounds.
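Putting the answer together, a corrected rotateRight might look like the sketch below. This is one way to apply the fix, writing into a new Image with swapped dimensions rather than into this.img, and returning that new image:

public Image rotateRight()
{
    int width = this.img.getWidth();
    int height = this.img.getHeight();
    // The rotated image swaps the dimensions: the old height becomes the new width.
    Image rotated = new Image(height, width);
    for (int i = 0; i < width; i++)
        for (int j = 0; j < height; j++)
            // Pixel (i, j) of the source lands at (height - 1 - j, i) after a 90-degree clockwise turn.
            rotated.img.setRGB(height - 1 - j, i, this.img.getRGB(i, j));
    return rotated;
}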

Java: represent a color image as an array of gray scale intensity values

Is there a built-in utility to represent a color (or gray scale) image as an array of gray scale intensity values?
If not, suppose I have a BufferedImage img, how can I represent this image as an array like this:
int[][] intensity = toIntensity(img);
What should the toIntensity() method look like?
There is a way to do it without doing your toIntensity() thing:
public static BufferedImage convertToType(BufferedImage sourceImage, int targetType) {
BufferedImage image;
if (sourceImage.getType() == targetType) {
image = sourceImage;
}else {
image = new BufferedImage(sourceImage.getWidth(), sourceImage.getHeight(), targetType);
image.getGraphics().drawImage(sourceImage, 0, 0, null);
}
return image;
}
Call this method with the targetType as: BufferedImage.TYPE_BYTE_GRAY
If you need it in another type you can then convert it back and it will retain its gray color.
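For example, assuming img is the BufferedImage from the question, you could then fill the int[][] intensity array by reading band 0 of the converted image's raster:

BufferedImage gray = convertToType(img, BufferedImage.TYPE_BYTE_GRAY);
int[][] intensity = new int[gray.getWidth()][gray.getHeight()];
for (int x = 0; x < gray.getWidth(); x++)
    for (int y = 0; y < gray.getHeight(); y++)
        intensity[x][y] = gray.getRaster().getSample(x, y, 0); // gray value 0..255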
Alternatively, the toIntensity method could be something like this (averaging the three channels to get a gray intensity, rather than storing the packed RGB value):
public int[][] toIntensity(BufferedImage img){
    int[][] intensity = new int[img.getWidth()][img.getHeight()];
    for (int x = 0; x < img.getWidth(); x++) {
        for (int y = 0; y < img.getHeight(); y++) {
            Color c = new Color(img.getRGB(x, y));
            intensity[x][y] = (c.getRed() + c.getGreen() + c.getBlue()) / 3; // simple average, 0-255
        }
    }
    return intensity;
}

Java BufferedImage setRGB, getRGB error

I am editing a BufferedImage.
After altering a pixel in the picture, I check that the new value is what I expected it to be. However, it has not changed to the designated Color!
I thought it could be something to do with the Alpha value, so I recently added a step that extracts the Alpha value from the original pixel, and ensures that value is used when creating the new Color to be inserted back into the image.
System.out.println(newColors[0] + ", " + newColors[1] + ", " + newColors[2]);
Color oldColor = new Color(image.getRGB(x, y));
Color newColor = new Color(newColors[0], newColors[1], newColors[2], oldColor.getAlpha()); // create a new color from the RGB values.
image.setRGB(x, y, newColor.getRGB());// set the RGB of the pixel in the image.
for (int col : getRGBs(x,y)) {
System.out.println(col);
}
The method getRGBs() returns an array where:
index 0 is the red value
index 1 is the green value
index 2 is the blue value
The output looks like:
206, 207, 207
204
203
203
As you can see, the values 206, 207, 207 come back out of the image as 204, 203, 203 - in fact, every pixel I change comes back out as 204, 203, 203.
What am I doing wrong? It just doesn't make sense.
Thanks in advance!
I found my own answer online, I'll summarise it below:
In a BufferedImage backed by a restricted ColorModel (for example, an indexed palette), setRGB stores the nearest colour available. That means you might not get the colour you wanted, because the colours you can set are limited to the colours in the ColorModel.
You can get around that by creating your own BufferedImage, drawing the source image onto it, and then manipulating those pixels.
BufferedImage original = ImageIO.read(new File(file.getPath()));
image= new BufferedImage(original.getWidth(), original.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
image.getGraphics().drawImage(original, 0, 0, null);
for(int y = 0; y < original.getHeight(); y++){
for(int x = 0; x < original.getWidth(); x++){
image.setRGB(x,y, original.getRGB(x,y));
}
}
That solved the problem. Clearly, the ColorModel did not have the colours I had specified and thus adjusted the pixel to the nearest colour it could.
Source
I guess what you are looking for is WritableRaster, which lets you write directly into the image you have read.
Use ImageIO to write the final changes to a new file, or pass the same file to overwrite it.
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ImageTest {
BufferedImage image;
File imageFile = new File("C:\\Test\\test.bmp");
public ImageTest() {
try {
image = ImageIO.read(imageFile);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
public void editImage() throws IOException {
WritableRaster wr = image.getRaster();
int width = image.getWidth();
int height = image.getHeight();
for(int ii=0; ii<width; ii++) {
for(int jj=0; jj<height; jj++) {
int color = image.getRGB(ii, jj);
wr.setSample(ii, jj, 0 , 156);
}
}
ImageIO.write(image, "BMP", new File("C:\\Test\\test.bmp"));
}
public static void main(String[] args) throws IOException {
ImageTest test = new ImageTest();
test.editImage();
}
}
