In the following code I am trying to read a grayscale image, store the pixel values in a 2D array, and write the image back out under a different name. The code is:
package dct;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.Raster;
import java.io.File;
import javax.imageio.ImageIO;
public class writeGrayScale
{
public static void main(String[] args)
{
File file = new File("lightning.jpg");
BufferedImage img = null;
try
{
img = ImageIO.read(file);
}
catch(Exception e)
{
e.printStackTrace();
}
int width = img.getWidth();
int height = img.getHeight();
int[][] arr = new int[width][height];
Raster raster = img.getData();
for (int i = 0; i < width; i++)
{
for (int j = 0; j < height; j++)
{
arr[i][j] = raster.getSample(i, j, 0);
}
}
BufferedImage image = new BufferedImage(256, 256, BufferedImage.TYPE_BYTE_GRAY);
byte[] raster1 = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
System.arraycopy(arr,0,raster1,0,raster1.length);
//
BufferedImage image1 = image;
try
{
File output = new File("grayscale.jpg");
ImageIO.write(image1, "jpg", output);
}
catch(Exception e)
{
e.printStackTrace();
}
}// main
}// class
For this code, the error is:
Exception in thread "main" java.lang.ArrayStoreException
at java.lang.System.arraycopy(Native Method)
at dct.writeGrayScale.main(writeGrayScale.java:49)
Java Result: 1
How can I remove this error so that the grayscale image gets written?
I found this: "ArrayStoreException -- if an element in the src array could not be stored into the dest array because of a type mismatch." http://www.tutorialspoint.com/java/lang/system_arraycopy.htm
Two thoughts:
You're copying an int-array into a byte-array (see the minimal reproduction below).
That's not what the exception message spells out, but are the dimensions right? arr is a two-dimensional array, while raster1 is one-dimensional.
And you can't simply swap in a two-dimensional array for the byte-array while ignoring what the method you're calling actually returns.
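A minimal reproduction of the first point: System.arraycopy throws at runtime because the source's component type (int[], since arr is two-dimensional) cannot be stored into the destination's component type (byte):

int[][] src = new int[2][2];
byte[] dst = new byte[4];
System.arraycopy(src, 0, dst, 0, 4); // throws java.lang.ArrayStoreException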
Change int[][] arr to a byte[] that matches the destination raster. A TYPE_BYTE_GRAY image stores exactly one byte per pixel in row-major order, so the buffer needs width * height bytes:

byte[] arr = new byte[width * height];
for (int j = 0; j < height; j++) {
    for (int i = 0; i < width; i++) {
        arr[j * width + i] = (byte) raster.getSample(i, j, 0); // one byte per pixel, row-major
    }
}

Also make sure the destination BufferedImage is created with the source's width and height rather than a fixed 256x256, or the buffer lengths won't match.
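Putting it together, here is a minimal end-to-end sketch; it assumes the source image really is single-band grayscale, and it sizes the destination from the source:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.Raster;
import java.io.File;
import javax.imageio.ImageIO;

public class WriteGrayScale
{
    public static void main(String[] args) throws Exception
    {
        BufferedImage img = ImageIO.read(new File("lightning.jpg"));
        int width = img.getWidth();
        int height = img.getHeight();
        Raster raster = img.getData();

        // Size the destination from the source instead of hard-coding 256x256.
        BufferedImage out = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
        byte[] data = ((DataBufferByte) out.getRaster().getDataBuffer()).getData();

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                data[y * width + x] = (byte) raster.getSample(x, y, 0); // band 0 of a grayscale image
            }
        }

        ImageIO.write(out, "jpg", new File("grayscale.jpg"));
    }
}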
I am trying to save a depth map from a Kinect v2, which should come out as grayscale, but every time I try to save it as a JPG file using the type BufferedImage.TYPE_USHORT_GRAY, literally nothing happens (no warning on screen or in the console).
I manage to save it if I use BufferedImage.TYPE_USHORT_555_RGB or BufferedImage.TYPE_USHORT_565_RGB, but instead of being grayscale the depth maps come out bluish or greenish.
Here is the code sample:
short[] depth = myKinect.getDepthFrame();
int DHeight=424;
int DWidth = 512;
int dx=0;
int dy = 21;
BufferedImage bufferDepth = new BufferedImage(DWidth, DHeight, BufferedImage.TYPE_USHORT_GRAY);
try {
ImageIO.write(bufferDepth, "jpg", outputFileD);
} catch (IOException e) {
e.printStackTrace();
}
Is there anything I am doing wrong to save it in grayscale?
Thanks in advance
You have to assign your data (depth) to the BufferedImage (bufferDepth) first.
A simple way to do this is:
short[] depth = myKinect.getDepthFrame();
int DHeight = 424;
int DWidth = 512;
int dx = 0;
int dy = 21;
BufferedImage bufferDepth = new BufferedImage(DWidth, DHeight, BufferedImage.TYPE_USHORT_GRAY);
for (int j = 0; j < DHeight; j++) {
    for (int i = 0; i < DWidth; i++) {
        int index = i + j * DWidth;
        int value = depth[index] & 0xFFFF; // treat the short as unsigned
        // Write the raw 16-bit sample directly: new Color(r, g, b) would throw
        // an IllegalArgumentException for values outside 0..255, and setRGB
        // would squash the depth down to 8 bits per channel.
        bufferDepth.getRaster().setSample(i, j, 0, value);
    }
}
try {
ImageIO.write(bufferDepth, "jpg", outputFileD);
} catch (IOException e) {
e.printStackTrace();
}
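Note also that ImageIO.write returns a boolean, and it returns false when no registered writer can encode the given image; that would explain why "literally nothing happens" when a TYPE_USHORT_GRAY image is handed to the JPEG writer. Checking the return value makes the failure visible, and PNG (which can store 16-bit grayscale) is a safer target format:

if (!ImageIO.write(bufferDepth, "png", outputFileD)) {
    System.err.println("No registered writer could encode this image type");
}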
I have code that puts an image into a pixel matrix. I have to split this image into four parts, producing four separate image files for those parts.
Then I have to do some image processing on them, and finally join the parts back together into a single image.
Please help me achieve this.
Note: the image is colored, and I just want to split it into 4 equal parts and then put it back together as one; no changes are needed. Here is the code that builds four matrices of intensities, but I don't know what to do with it. It may be that there is no need for it at all.
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
class Optimization
{
public static void main(String[] args) throws IOException
{
BufferedImage hugeImage = ImageIO.read(new File("comp.jpg"));
final byte[] pixels = ((DataBufferByte) hugeImage.getRaster().getDataBuffer()).getData();
int width = hugeImage.getWidth();
int height = hugeImage.getHeight();
if (width % 2 != 0)
width = width - 1;
if (height % 2 != 0)
height = height - 1;
//System.out.print(width + " " + height);
int[][] intensity = new int[width][height];
int[][] b1 = new int[width / 2][height / 2];
int[][] b2 = new int[width / 2][height / 2];
int[][] b3 = new int[width / 2][height / 2];
int[][] b4 = new int[width / 2][height / 2];
int x1 = 0, y1 = 0, x2 = 0, y2 = 0, x3 = 0, x4 = 0, y3 = 0, y4 = 0;
final int pixelLength = 3;
for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength)
{
int a1, a2, a3;
a3 = ((int) pixels[pixel] & 0xff); // blue
a2 = ((int) pixels[pixel + 1] & 0xff); // green
a1 = ((int) pixels[pixel + 2] & 0xff); // red
int i = (a1 + a2 + a3) / 3;
intensity[col][row] = i;
if ((col <= width / 2 - 1) && (row <= height / 2 - 1))
{
b1[x1][y1] = i;
x1++;
if (col == width / 2 - 1)
{
x1 = 0;
y1++;
}
}
if ((col < width) && (row <= height / 2 - 1) && (col > width / 2 - 1))
{
b2[x2][y2] = i;
x2++;
if (col == width - 1)
{
x2 = 0;
y2++;
}
}
if ((col < width / 2) && (row < height) && (row >= height / 2))
{
b3[x3][y3] = i;
x3++;
if (col == width / 2 - 1)
{
x3 = 0;
y3++;
}
}
if ((col > width / 2 - 1) && (row > height / 2 - 1))
{
b4[x4][y4] = i;
x4++;
if (col == width - 1)
{
x4 = 0;
y4++;
}
}
col++;
if (col == width)
{
col = 0;
row++;
}
}
for (int m = 0; m < height / 2; m++)
{
for (int n = 0; n < width / 2; n++)
{
System.out.print(b1[n][m] + " ");
}
System.out.println();
}
}
}
java.awt.image.BufferedImage provides getSubimage(int x, int y, int w, int h), which "returns a subimage defined by a specified rectangular region." You just extract four equal regions and that's it.
You can use getSubimage(int x, int y, int w, int h), but you will get an image that shares the same raster as the original image, meaning that if you modify the new sub-image, you also modify the original image. If that's not a problem for you, then use it.
Otherwise, as you have already accessed the DataBuffer (good idea), here is simple code that does what you want (I just make copies of the DataBuffer, and it works even when the image dimensions are odd). Note that it copies one byte per pixel, so it assumes a single-byte type such as TYPE_BYTE_GRAY; for a multi-byte type such as TYPE_3BYTE_BGR, every horizontal offset and length would have to be multiplied by the pixel stride:
BufferedImage image = ... ;
BufferedImage q1 = new BufferedImage(image.getWidth()/2, image.getHeight()/2, image.getType()) ;
BufferedImage q2 = new BufferedImage(image.getWidth()-image.getWidth()/2, image.getHeight()/2, image.getType()) ;
BufferedImage q3 = new BufferedImage(image.getWidth()/2, image.getHeight()-image.getHeight()/2, image.getType()) ;
BufferedImage q4 = new BufferedImage(image.getWidth()-image.getWidth()/2, image.getHeight()-image.getHeight()/2, image.getType()) ;
byte[] bb = ((DataBufferByte)image.getRaster().getDataBuffer()).getData() ;
byte[] bbq1 = ((DataBufferByte)q1.getRaster().getDataBuffer()).getData() ;
byte[] bbq2 = ((DataBufferByte)q2.getRaster().getDataBuffer()).getData() ;
byte[] bbq3 = ((DataBufferByte)q3.getRaster().getDataBuffer()).getData() ;
byte[] bbq4 = ((DataBufferByte)q4.getRaster().getDataBuffer()).getData() ;
for (int y=0 ; y < q1.getHeight() ; y++) // Fill Q1 and Q2
{
System.arraycopy(bb, y*image.getWidth(), bbq1, y*q1.getWidth(), q1.getWidth()) ;
System.arraycopy(bb, y*image.getWidth()+q1.getWidth(), bbq2, y*q2.getWidth(), q2.getWidth()) ;
}
for (int y=0 ; y < q3.getHeight() ; y++) // Fill Q3 and Q4
{
System.arraycopy(bb, (y+q1.getHeight())*image.getWidth(), bbq3, y*q3.getWidth(), q3.getWidth()) ;
System.arraycopy(bb, (y+q1.getHeight())*image.getWidth()+q3.getWidth(), bbq4, y*q4.getWidth(), q4.getWidth()) ;
}
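To join the processed quadrants back together, the same index arithmetic works in reverse (again assuming one byte per pixel):

BufferedImage joined = new BufferedImage(image.getWidth(), image.getHeight(), image.getType());
byte[] bbj = ((DataBufferByte) joined.getRaster().getDataBuffer()).getData();
for (int y = 0; y < q1.getHeight(); y++) // Q1 and Q2
{
    System.arraycopy(bbq1, y * q1.getWidth(), bbj, y * image.getWidth(), q1.getWidth());
    System.arraycopy(bbq2, y * q2.getWidth(), bbj, y * image.getWidth() + q1.getWidth(), q2.getWidth());
}
for (int y = 0; y < q3.getHeight(); y++) // Q3 and Q4
{
    System.arraycopy(bbq3, y * q3.getWidth(), bbj, (y + q1.getHeight()) * image.getWidth(), q3.getWidth());
    System.arraycopy(bbq4, y * q4.getWidth(), bbj, (y + q1.getHeight()) * image.getWidth() + q3.getWidth(), q4.getWidth());
}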
Assume the following matrix acts as both an image and a kernel in a matrix convolution operation:
0 1 2
3 4 5
6 7 8
To calculate the neighbour pixel index you would use the following formula:
neighbourColumn = imageColumn + (maskColumn - centerMaskColumn);
neighbourRow = imageRow + (maskRow - centerMaskRow);
Thus the output of convolution would be:
output0 = {0,1,3,4} x {4,5,7,8} = 58
output1 = {0,1,2,3,4,5} x {3,4,5,6,7,8} = 100
output2 = {1,2,4,5} x {3,4,6,7} = 70
output3 = {0,1,3,4,6,7} x {1,2,4,5,7,8} = 132
output4 = {0,1,2,3,4,5,6,7,8} x {0,1,2,3,4,5,6,7,8} = 204
output5 = {1,2,4,5,7,8} x {0,1,3,4,6,7} = 132
output6 = {3,4,6,7} x {1,2,4,5} = 70
output7 = {3,4,5,6,7,8} x {0,1,2,3,4,5} = 100
output8 = {4,5,7,8} x {0,1,3,4} = 58
Thus the output matrix would be:
58 100 70
132 204 132
70 100 58
Now assume the matrix is flattened to give the following vector:
0 1 2 3 4 5 6 7 8
This vector now acts as both an image and a kernel in a vector convolution operation, for which the output should be:
58 100 70 132 204 132 70 100 58
Given the code below how do you calculate the neighbour element index for the vector such that it corresponds with the same neighbour element in the matrix?
public int[] convolve(int[] image, int[] kernel)
{
int imageValue;
int kernelValue;
int outputValue;
int neighbour;
int[] outputImage = new int[image.length];
// loop through image
for (int i = 0; i < image.length; i++)
{
outputValue = 0;
// loop through kernel
for (int j = 0; j < kernel.length; j++)
{
neighbour = ?;
// discard out of bound neighbours
if (neighbour >= 0 && neighbour < image.length)
{
imageValue = image[neighbour];
kernelValue = kernel[j];
outputValue += imageValue * kernelValue;
}
}
outputImage[i] = outputValue;
}
return outputImage;
}
The neighbour index is computed by offsetting the original pixel index by the difference between the index of the current element and half the size of the matrix. For example, to compute the column index:
int neighbourCol = imageCol + col - (size / 2);
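The row index is computed the same way, and the two combine into the flat vector index as neighbourRow * width + neighbourCol. Plugged into the question's convolve loop, this gives the following sketch (it assumes the image width and the square kernel side size are in scope; note that checking only the flat index against the array bounds would wrongly accept neighbours that wrap around the row edges):

int imageCol = i % width; // recover 2D coordinates from the flat image index
int imageRow = i / width;
int col = j % size;       // and from the flat kernel index
int row = j / size;
int neighbourCol = imageCol + col - size / 2;
int neighbourRow = imageRow + row - size / 2;
// bounds must be checked per axis, not just 0 <= neighbour < image.length
if (neighbourCol >= 0 && neighbourCol < width &&
    neighbourRow >= 0 && neighbourRow < image.length / width)
{
    int neighbour = neighbourCol + neighbourRow * width;
    outputValue += image[neighbour] * kernel[j];
}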
I put a working demo on GitHub, trying to keep the whole convolution algorithm as readable as possible:
int[] dstImage = new int[srcImage.width() * srcImage.height()];
srcImage.forEachElement((image, imageCol, imageRow) -> {
Pixel pixel = new Pixel();
forEachElement((filter, col, row) -> {
int neighbourCol = imageCol + col - (size / 2);
int neighbourRow = imageRow + row - (size / 2);
if (srcImage.hasElementAt(neighbourCol, neighbourRow)) {
int color = srcImage.at(neighbourCol, neighbourRow);
int weight = filter.at(col, row);
pixel.addWeightedColor(color, weight);
}
});
dstImage[(imageRow * srcImage.width() + imageCol)] = pixel.rgb();
});
As you are dealing with 2D images, you will have to retain some information about the images in addition to the plain 1D pixel array. In particular, you at least need the width of the image (and of the mask) in order to find out which indices in the 1D array correspond to which indices in the original 2D image. And as already pointed out by Raffaele in his answer, there are general rules for the conversion between these ("virtual") 2D coordinates and 1D coordinates in such a pixel array:
int pixelX = ...;
int pixelY = ...;
int index = pixelX + pixelY * imageSizeX;
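The inverse, recovering the ("virtual") 2D coordinates from a flat index, is:

int pixelX = index % imageSizeX;
int pixelY = index / imageSizeX;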
Based on this, you can do your convolution simply on a 2D image. The limits for the pixels that you may access may easily be checked. The loops are simple 2D loops over the image and the mask. It all boils down to the point where you access the 1D data with the 2D coordinates, as described above.
Here is an example. It applies a Sobel filter to the input image. (There may still be something odd with the pixel values, but the convolution itself and the index computations should be right)
import java.awt.Graphics2D;
import java.awt.GridLayout;
import java.awt.image.BufferedImage;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;
public class ConvolutionWithArrays1D
{
public static void main(String[] args) throws IOException
{
final BufferedImage image =
asGrayscaleImage(ImageIO.read(new File("lena512color.png")));
SwingUtilities.invokeLater(new Runnable()
{
@Override
public void run()
{
createAndShowGUI(image);
}
});
}
private static void createAndShowGUI(BufferedImage image0)
{
JFrame f = new JFrame();
f.getContentPane().setLayout(new GridLayout(1,2));
f.getContentPane().add(new JLabel(new ImageIcon(image0)));
BufferedImage image1 = compute(image0);
f.getContentPane().add(new JLabel(new ImageIcon(image1)));
f.pack();
f.setLocationRelativeTo(null);
f.setVisible(true);
}
private static BufferedImage asGrayscaleImage(BufferedImage image)
{
BufferedImage gray = new BufferedImage(
image.getWidth(), image.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
Graphics2D g = gray.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
return gray;
}
private static int[] obtainGrayscaleIntArray(BufferedImage image)
{
BufferedImage gray = new BufferedImage(
image.getWidth(), image.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
Graphics2D g = gray.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
DataBuffer dataBuffer = gray.getRaster().getDataBuffer();
DataBufferByte dataBufferByte = (DataBufferByte)dataBuffer;
byte data[] = dataBufferByte.getData();
int result[] = new int[data.length];
for (int i=0; i<data.length; i++)
{
result[i] = data[i] & 0xff; // mask: bytes are signed in Java, gray samples are 0..255
}
return result;
}
private static BufferedImage createImageFromGrayscaleIntArray(
int array[], int imageSizeX, int imageSizeY)
{
BufferedImage gray = new BufferedImage(
imageSizeX, imageSizeY, BufferedImage.TYPE_BYTE_GRAY);
DataBuffer dataBuffer = gray.getRaster().getDataBuffer();
DataBufferByte dataBufferByte = (DataBufferByte)dataBuffer;
byte data[] = dataBufferByte.getData();
for (int i=0; i<data.length; i++)
{
data[i] = (byte)array[i];
}
return gray;
}
private static BufferedImage compute(BufferedImage image)
{
int imagePixels[] = obtainGrayscaleIntArray(image);
int mask[] =
{
1,0,-1,
2,0,-2,
1,0,-1,
};
int outputPixels[] =
Convolution.filter(imagePixels, image.getWidth(), mask, 3);
return createImageFromGrayscaleIntArray(
outputPixels, image.getWidth(), image.getHeight());
}
}
class Convolution
{
public static final int[] filter(
final int[] image, int imageSizeX,
final int[] mask, int maskSizeX)
{
int imageSizeY = image.length / imageSizeX;
int maskSizeY = mask.length / maskSizeX;
int output[] = new int[image.length];
for (int y=0; y<imageSizeY; y++)
{
for (int x=0; x<imageSizeX; x++)
{
int outputPixelValue = 0;
for (int my=0; my< maskSizeY; my++)
{
for (int mx=0; mx< maskSizeX; mx++)
{
int neighborX = x + mx -maskSizeX / 2;
int neighborY = y + my -maskSizeY / 2;
if (neighborX >= 0 && neighborX < imageSizeX &&
neighborY >= 0 && neighborY < imageSizeY)
{
int imageIndex =
neighborX + neighborY * imageSizeX;
int maskIndex = mx + my * maskSizeX;
int imagePixelValue = image[imageIndex];
int maskPixelValue = mask[maskIndex];
outputPixelValue +=
imagePixelValue * maskPixelValue;
}
}
}
outputPixelValue = truncate(outputPixelValue);
int outputIndex = x + y * imageSizeX;
output[outputIndex] = outputPixelValue;
}
}
return output;
}
private static final int truncate(final int pixelValue)
{
return Math.min(255, Math.max(0, pixelValue));
}
}
I'm trying to implement an averaging filter with different sizes: 3x3, 5x5, 7x7 and 11x11. I did the calculations, and the results are correct while debugging, but the problem is that the values are stored in the writable raster as negatives, so I'm getting weird results. The second weird thing is that when I read back the value of the same pixel that was stored as negative, it is retrieved as a positive value!
I'm using int.
What's wrong? Any help?!
Here is my code for the 5x5 averaging filter.
public static BufferedImage filter5x5_2D(BufferedImage paddedBI , BufferedImage bi , double[][]filter)
{
WritableRaster myImage = paddedBI.copyData(null);
BufferedImage img = new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
WritableRaster myImage2 = img.copyData(null);
for (int i = 2; i < myImage.getHeight() - 2; i++)
{
for (int j = 2; j < myImage.getWidth() - 2; j++)
{
int value = 0;
int copyi = i - 2;
for (int m = 0; m < 5; m++)
{
int copyj = j - 2;
for (int n = 0; n < 5; n++)
{
int result = myImage.getSample(copyj, copyi, 0);
double f = filter[m][n];
double add = result * filter[m][n];
value += (int) (filter[m][n] * myImage.getSample(copyj, copyi, 0));
copyj++;
}
copyi++;
//myImage2.setSample(j-1, i-1, 0, value);
}
myImage2.setSample(j - 2, i - 2, 0, value);
//int checkResult = myImage2.getSample(j-1, i-1, 0);
}
}
BufferedImage res= new BufferedImage(bi.getWidth(),bi.getHeight(),BufferedImage.TYPE_BYTE_GRAY);
res.setData(myImage2);
return res;
}
I do not find any negative values. Here is the main method with which I tested your code:
public static void main(String[] args) throws IOException {
BufferedImage bi = ImageIO.read(new File("C:/Tmp/test.bmp"));
BufferedImage newImage = new BufferedImage(bi.getWidth()+4, bi.getHeight()+4, bi.getType());
Graphics g = newImage.getGraphics();
g.setColor(Color.white);
g.fillRect(0,0,bi.getWidth()+4,bi.getHeight()+4);
g.drawImage(bi, 2, 2, null);
g.dispose();
double[][] filter = new double[5][5];
for (int i = 0; i < 5; ++i) {
for (int j = 0; j < 5; ++j) {
filter[i][j] = 1.0 / (5 * 5);
}
}
BufferedImage filtered = filter5x5_2D(newImage, bi, filter);
ImageIO.write(filtered, "bmp", new File("C:/tmp/filtered.bmp"));
}
Note that your variables result, f and add are unused. Also, it would be better if value were of type double instead of int. In the worst case you could get 25 samples of value 11, each multiplied by 1/25 and truncated to zero by the int cast; your code would then produce a grey value of 0 where it should produce 11.
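Concretely, the inner accumulation could use a double accumulator and round once per output pixel (a sketch based on the loop body above):

double value = 0;
int copyi = i - 2;
for (int m = 0; m < 5; m++)
{
    int copyj = j - 2;
    for (int n = 0; n < 5; n++)
    {
        value += filter[m][n] * myImage.getSample(copyj, copyi, 0);
        copyj++;
    }
    copyi++;
}
myImage2.setSample(j - 2, i - 2, 0, (int) Math.round(value));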
I'm having a problem that's been driving me crazy for days. Hopefully, someone here can help me understand what's happening. I'm trying to write a simple Java program that will take a directory of JPEGs, convert them to greyscale, and save them to the same folder.
My procedure is to set the red, green, and blue components of each pixel to that pixel's luminance value. The code runs fine and seems to do what I want. If I view the completed image in a JFrame, it shows up black and white. However, when I save the image (using ImageIO.write()), for some reason, it becomes colorized and looks rather red. I'd love to post the images but I guess my reputation is not good enough...
Since I can't put the images, I'll try to explain it as well as I can. Here's what I know:
If I view the newly created image using the Java program, it appears black and white as I desire.
If I save the image and try to view it using an external program, it does not appear black and white at all and just looks like a watered down version of the original image.
If I open that same saved image (the one that should be black and white but is not) using the Java program, it does indeed appear black and white.
If I save the file as a png instead, everything works fine.
Here's the relevant code I'm using if anyone would like to see it:
import java.io.*;
import javax.swing.*;
import javax.imageio.ImageIO;
import java.awt.*;
import java.awt.image.*;
public class ImageEZ {
public static void displayImage(BufferedImage img) {
class ImageFrame extends JFrame {
ImageFrame(BufferedImage img) {
super();
class ImagePanel extends JPanel {
BufferedImage image;
ImagePanel(BufferedImage image) {
this.image = ImageEZ.duplicate(image);
}
protected void paintComponent(Graphics g) {
super.paintComponent(g);
g.drawImage(image, 0, 0, image.getWidth(), image.getHeight(), this);
}
}
ImagePanel panel = new ImagePanel(img);
add(panel);
}
}
JFrame frame = new ImageFrame(img);
frame.setSize(img.getWidth(), img.getHeight());
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
public static BufferedImage duplicate(BufferedImage img) {
BufferedImage dup = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_ARGB);
dup.setRGB(0, 0, img.getWidth(), img.getHeight(), ImageEZ.getRGB(img), 0, img.getWidth());
return dup;
}
public static int[] getRedArray(BufferedImage img) {
int[] tArray = ImageEZ.getRGB(img);
for (int i = 0; i < tArray.length; i++) {
tArray[i] = tArray[i] << 8;
tArray[i] = tArray[i] >>> 24;
}
return tArray;
}
public static int[] getRedArray(int[] tArray) {
int[] nArray = new int[tArray.length];
for (int i = 0; i < tArray.length; i++) {
nArray[i] = tArray[i] << 8;
nArray[i] = nArray[i] >>> 24;
}
return nArray;
}
public static int[] getGreenArray(BufferedImage img) {
int[] tArray = ImageEZ.getRGB(img);
for (int i = 0; i < tArray.length; i++) {
tArray[i] = tArray[i] << 16;
tArray[i] = tArray[i] >>> 24;
}
return tArray;
}
public static int[] getGreenArray(int[] tArray) {
int[] nArray = new int[tArray.length];
for (int i = 0; i < tArray.length; i++) {
nArray[i] = tArray[i] << 16;
nArray[i] = nArray[i] >>> 24;
}
return nArray;
}
public static int[] getBlueArray(BufferedImage img) {
int[] tArray = ImageEZ.getRGB(img);
for (int i = 0; i < tArray.length; i++) {
tArray[i] = tArray[i] << 24;
tArray[i] = tArray[i] >>> 24;
}
return tArray;
}
public static int[] getBlueArray(int[] tArray) {
int[] nArray = new int[tArray.length];
for (int i = 0; i < tArray.length; i++) {
nArray[i] = tArray[i] << 24;
nArray[i] = nArray[i] >>> 24;
}
return nArray;
}
public static int[] YBRtoRGB(int[] ybr) {
int[] y = getRedArray(ybr);
int[] r = getBlueArray(ybr);
int[] b = getGreenArray(ybr);
int[] red = new int[y.length];
int[] green = new int[y.length];
int[] blue = new int[y.length];
for (int i = 0; i < red.length; i++) {
red[i] = (int) (y[i] + 1.402*r[i]);
green[i] = (int) (y[i] + -.344*b[i] + -.714*r[i]);
blue[i] = (int) (y[i] + 1.772*b[i]);
}
int[] RGB = new int[red.length];
for (int i = 0; i < red.length; i++) {
RGB[i] = red[i] << 16 | green[i] << 8 | blue[i] | 255 << 24;
}
return RGB;
}
public static int[] getLumArray(BufferedImage img) {
int[] red = getRedArray(img); //Returns an array of the red values of the pixels
int[] green = getGreenArray(img);
int[] blue = getBlueArray(img);
int[] Y = new int[red.length];
for (int i = 0; i < red.length; i++) {
Y[i] = (int) (.299*red[i] + .587*green[i] + .114*blue[i]);
}
return Y;
}
// Converts an image to greyscale using the luminance of each pixel
public static BufferedImage deSaturate(BufferedImage original) {
BufferedImage deSaturated = new BufferedImage(original.getWidth(),
original.getHeight(),
BufferedImage.TYPE_INT_ARGB);
int[] Y = ImageEZ.getLumArray(original); //Returns an array of the luminances
for (int i = 0; i < Y.length; i++) {
Y[i] = 255 << 24 | Y[i] << 16;
}
int[] rgb = ImageEZ.YBRtoRGB(Y); //Converts the YCbCr colorspace to RGB
deSaturated.setRGB(0, 0, original.getWidth(), original.getHeight(),
rgb, 0, original.getWidth());
return deSaturated;
}
// Takes a folder of JPEGs and converts them to Greyscale
public static void main(String[] args) throws Exception {
File root = new File(args[0]);
File[] list = root.listFiles();
for (int i = 0; i < list.length; i++) {
BufferedImage a = ImageEZ.deSaturate(ImageIO.read(list[i]));
displayImage(a); //Displays the converted images.
boolean v = ImageIO.write(a, "jpg", new File(list[i].getParent() + "\\" + i + ".jpg"));
}
// Displays the first newly saved image
displayImage(ImageIO.read(new File(list[0].getParent() + "\\" + 0 + ".png")));
}
}
I just want to stress that this is not a question about alternative methods for turning an image black and white. What I really want to know is why it works as a PNG but not as a JPG. Thanks a lot to all who read this far!
This is a known issue with ImageIO.
When an image is saved/loaded as JPEG, the API doesn't know how to handle the alpha component (as I understand the problem), so the colors come out skewed.
The solution is to not write images with an alpha component to the JPEG format, for example by using a non-alpha image type such as TYPE_INT_RGB instead...
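For instance, a minimal sketch that repaints the ARGB image into a TYPE_INT_RGB image before saving (toRGB is a hypothetical helper, not part of the poster's class):

// Repaint the ARGB image into an RGB image, discarding the alpha channel.
public static BufferedImage toRGB(BufferedImage src) {
    BufferedImage rgb = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
    Graphics2D g = rgb.createGraphics();
    g.drawImage(src, 0, 0, null);
    g.dispose();
    return rgb;
}

In the poster's main loop, that would mean writing toRGB(a) instead of a to the JPEG file.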