Why is my resulting image always flipped and rotated? - java

I'm trying to access each pixel, manipulate it, then save it back to the system. But the resulting image is always flipped and rotated. Why is this?
Here is my code for input:
BufferedImage input_image = ImageIO.read(new File("F:\\sophie4.png"));
int result[][] = convertTo2DWithoutUsingGetRGB(input_image);
private static int[][] convertTo2DWithoutUsingGetRGB(BufferedImage image) {
    final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    final int width = image.getWidth();
    final int height = image.getHeight();
    final boolean hasAlphaChannel = image.getAlphaRaster() != null;
    int[][] result = new int[height][width];
    if (hasAlphaChannel) {
        final int pixelLength = 4;
        for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
            int argb = 0;
            argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
            argb += ((int) pixels[pixel + 1] & 0xff); // blue
            argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
            argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
            result[row][col] = argb;
            col++;
            if (col == width) {
                col = 0;
                row++;
            }
        }
    } else {
        final int pixelLength = 3;
        for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
            int argb = 0;
            argb += -16777216; // 255 alpha
            argb += ((int) pixels[pixel] & 0xff); // blue
            argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
            argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
            result[row][col] = argb;
            col++;
            if (col == width) {
                col = 0;
                row++;
            }
        }
    }
    return result;
}
That works; I get the pixels. But even without processing the pixels, if I output the image, it's always flipped. Here is my output code:
BufferedImage image = new BufferedImage(result.length, result[0].length, BufferedImage.TYPE_INT_RGB);
for (int row = 0; row < result.length; row++) {
    for (int col = 0; col < result[row].length; col++) {
        image.setRGB(row, col, result[row][col]);
    }
}
File ImageFile = new File("path");
try {
    ImageIO.write(image, "png", ImageFile);
} catch (IOException e) {
    e.printStackTrace();
}
You can see the input and output image below

You're getting confused (or at least I am) because your resulting array is height by width (not width by height, which makes more sense to me), so, instead of...
BufferedImage image = new BufferedImage(result.length, result[0].length, BufferedImage.TYPE_INT_RGB);
it should be...
BufferedImage image = new BufferedImage(result[0].length, result.length, BufferedImage.TYPE_INT_RGB);
and
image.setRGB(row, col, result[row][col]);
should be
image.setRGB(col, row, result[row][col]); // See why that's confusing
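Putting both fixes together, the output code would look roughly like this (a minimal sketch, reusing the result array and the placeholder path from the question):
BufferedImage image = new BufferedImage(result[0].length, result.length, BufferedImage.TYPE_INT_RGB);
for (int row = 0; row < result.length; row++) {
    for (int col = 0; col < result[row].length; col++) {
        image.setRGB(col, row, result[row][col]); // setRGB takes (x, y), i.e. (col, row)
    }
}
try {
    ImageIO.write(image, "png", new File("path"));
} catch (IOException e) {
    e.printStackTrace();
}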

Related

Sobel Operator MaskX

Why does my image quality get worse after masking in the x direction?
public void doMaskX() {
    int[][] maskX = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };
    int rgb, alpha = 0;
    int[][] square = new int[3][3];
    for (int y = 0; y < width - 3; y++) {
        for (int x = 0; x < height - 3; x++) {
            int sum = 0;
            for (int i = 0; i < 3; i++) {
                for (int j = 0; j < 3; j++) {
                    rgb = imgx.getRGB(y + i, x + j);
                    alpha = (rgb >> 24) & 0xff;
                    int red = (rgb >> 16) & 0xff;
                    int green = (rgb >> 8) & 0xff;
                    int blue = rgb & 0xff;
                    square[i][j] = (red + green + blue) / 3;
                    sum += square[i][j] * maskX[i][j];
                }
            }
            rgb = (alpha << 24) | (sum << 16) | (sum << 8) | sum;
            imgx.setRGB(y, x, rgb);
        }
    }
    writeImg();
}
The quality of the second image should be better, and why does a yellow color appear?
It is important to realize that you are computing the intensity of the gradient here, and that is what you are displaying. The intensity (or magnitude) is therefore a non-negative number, so you have to take the absolute value:
sum = Math.abs(sum);
If you also take the y derivative, you can then combine the two:
sum = (int) Math.sqrt(sumx * sumx + sumy * sumy);
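One further detail, as a minimal sketch reusing the sum and alpha variables from the question: the magnitude can exceed 255, so it also needs to be clamped before it is packed back into the ARGB int, otherwise the excess bits spill into the neighbouring channels and produce colour tints like the yellow you are seeing.
sum = Math.min(255, sum); // keep the value inside a single colour byte
rgb = (alpha << 24) | (sum << 16) | (sum << 8) | sum; // grey pixel, no channel overflow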

Need to exclude 20 pixels from the top, bottom, right and left of an image, and then I need to compare pixel values

I need to determine whether the given image is blank, i.e., whether all of its pixel values are the same; please find the code below. I want to set a tolerance: I don't want to pass the top, bottom, left and right 20 pixels of the image to this logic. Please help!
for (String pic : Finallist) {
    BufferedImage image = ImageIO.read(new File(pic));
    final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    final int width = image.getWidth();
    final int height = image.getHeight();
    final boolean hasAlphaChannel = image.getAlphaRaster() != null;
    boolean blankImage = true;
    int[][] result = new int[height][width];
    if (hasAlphaChannel) {
        final int pixelLength = 4;
        for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
            int argb = 0;
            argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
            argb += ((int) pixels[pixel + 1] & 0xff); // blue
            argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
            argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
            result[row][col] = argb;
            if (result[row][col] != result[0][0]) {
                blankImage = false;
            }
            col++;
            if (col == width) {
                col = 0;
                row++;
            }
        }
    } else {
        final int pixelLength = 3;
        for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
            int argb = 0;
            argb += -16777216; // 255 alpha
            argb += ((int) pixels[pixel] & 0xff); // blue
            argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
            argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
            result[row][col] = argb;
            if (result[row][col] != result[0][0]) {
                blankImage = false;
            }
            col++;
            if (col == width) {
                col = 0;
                row++;
            }
        }
    }
    if (blankImage == true) {
        try {
            System.out.println("Blank image found and its deleted");
            File f = new File(pic);
            f.delete();
        } catch (Exception e) {
            System.out.println("Exception" + e);
        }
    } else {
        FinalListWithOutBlank.add(pic);
    }
}
I want everything done on the fly so that my code's performance does not suffer; I just want to skip those border pixels before they reach this logic.
Copy the desired area of interest into another image with
BufferedImage imageForEvaluation = image.getSubimage(x, y, width, height);
and use this image for your logic.
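A minimal sketch of that idea, using the 20-pixel margin from the question. One caveat worth noting: getSubimage returns a view that shares the parent's data buffer, so the raw DataBufferByte loop from the question would still iterate over the full image; read the sub-image through its own getRGB/raster methods, or copy it into a fresh BufferedImage first.
int margin = 20; // border width to skip, taken from the question
BufferedImage imageForEvaluation = image.getSubimage(
        margin, margin,
        image.getWidth() - 2 * margin,   // width of the inner region
        image.getHeight() - 2 * margin); // height of the inner region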

Image to byte[] to Image

I'm trying to do some image processing with Java.
As a start, before applying any filters, I'm converting my image to a byte array, then converting it back to an image and saving it to see how that goes.
The output image does not match the input one; some information is lost, which causes the output colors to look different.
Please tell me what the problem is and what I am missing.
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class Ola {

    BufferedImage img = null;

    public void loadImage() {
        try {
            img = ImageIO.read(new File("/home/a/Pictures/Tux-vegeta.png"));
        } catch (IOException e) {
            System.out.println("image(s) could not load correctly, try changing the path");
        }
    }

    public byte[] convertToArray() {
        int w = img.getWidth();
        int h = img.getHeight();
        int bands = img.getSampleModel().getNumBands();
        System.out.print(bands);
        if (bands != 4) {
            System.out.println("The image does not have 4 color bands");
        }
        byte bytes[] = new byte[4 * w * h];
        int index = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int pixel = img.getRGB(x, y);
                int alpha = (pixel >> 24) & 0xFF;
                int red = (pixel >> 16) & 0xFF;
                int green = (pixel >> 8) & 0xFF;
                int blue = pixel & 0xFF;
                bytes[index++] = (byte) alpha;
                bytes[index++] = (byte) red;
                bytes[index++] = (byte) green;
                bytes[index++] = (byte) blue;
            }
        }
        return bytes;
    }

    public void convertToImage(byte[] bytes) {
        try {
            int w = 300;
            int h = 300;
            int index = 0;
            BufferedImage resultPNG = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            for (int i = 0; i < h; i++) {
                for (int j = 0; j < w; j++) {
                    int pixel = (bytes[index] << 24) | (bytes[index + 1] << 16) | (bytes[index + 2] << 8) | (bytes[index + 3]);
                    resultPNG.setRGB(j, i, pixel);
                    index += 4;
                }
            }
            File outputImage = new File("/home/a/Pictures/test.png");
            ImageIO.write(resultPNG, "png", outputImage);
        } catch (IOException e) {
            System.out.println("image write error");
        }
    }

    public static void main(String[] args) {
        Ola ola = new Ola();
        ola.loadImage();
        ola.convertToImage(ola.convertToArray());
    }
}
What you are missing is converting your signed bytes back to unsigned. Change the line
int pixel = (bytes[index] << 24) | (bytes[index + 1] << 16) | (bytes[index + 2] << 8) | (bytes[index + 3]);
to the following:
int pixel = ((bytes[index] & 0xFF) << 24) | ((bytes[index + 1] & 0xFF) << 16) | ((bytes[index + 2] & 0xFF) << 8) | (bytes[index + 3] & 0xFF);
Since you want the alpha channel, your destination image should use TYPE_INT_ARGB instead of TYPE_INT_RGB; with TYPE_INT_RGB the BufferedImage ignores the alpha byte.
Since PNGs do not necessarily load into the TYPE_INT_ARGB color model, you can use a Graphics object to draw the loaded BufferedImage into a BufferedImage created with TYPE_INT_ARGB:
public void loadImage() {
    try {
        BufferedImage tempimg = ImageIO.read(new File("/home/a/Pictures/Tux-vegeta.png"));
        img = new BufferedImage(300, 300, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2 = img.createGraphics(); // requires import java.awt.Graphics2D
        g2.drawImage(tempimg, null, 0, 0);
        g2.dispose(); // release the graphics context once drawing is done
    } catch (IOException e) {
        System.out.println("image(s) could not load correctly, try changing the path");
    }
}

Java BufferedImage Not Registering Transparent Pixels?

I have used the method ImageIO.read(File file) to read a PNG image file. However, when I use the getRGB(int x, int y) method on it to extract the alpha, it always returns 255 whether the pixel is transparent or not. How do I remedy this?
When converting packed int colors to Color objects, you need to tell it if it should calculate the alpha value or not.
new Color(image.getRGB(x, y), true).getAlpha();
See Color(int, boolean) for more details
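If you would rather avoid creating a Color object per pixel, the alpha byte can also be pulled straight out of the packed ARGB int (a small sketch, not part of the original answer):
int argb = image.getRGB(x, y);
int alpha = argb >>> 24; // the top 8 bits of the packed value hold the alpha channel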
Just wanted to point out that using the method getRGB(x, y) is extremely inefficient. If you want the pixels of an image, you can extract the colours of each individual pixel directly from the raster's data buffer and store them in an int array. Credit also to mota for explaining why getRGB is inefficient; see his post. Example below:
/**
 * Method that extracts pixel data from an image.
 *
 * @return a 2D array representing the pixels of the image
 */
public static int[][] getImageData(BufferedImage img) {
    int height = img.getHeight();
    int width = img.getWidth();
    final byte[] imgPixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
    final boolean is_Alpha_Present = img.getAlphaRaster() != null;
    int[][] imgArr = new int[height][width];
    if (is_Alpha_Present) {
        final int pixelLength = 4; // number of bytes used to represent a pixel if alpha value present
        for (int pixel = 0, row = 0, col = 0; pixel < imgPixels.length; pixel = pixel + pixelLength) {
            int argb = 0;
            argb += (((int) imgPixels[pixel] & 0xff) << 24); // getting the alpha for the pixel
            argb += ((int) imgPixels[pixel + 1] & 0xff); // getting the blue colour
            argb += (((int) imgPixels[pixel + 2] & 0xff) << 8); // getting the green colour
            argb += (((int) imgPixels[pixel + 3] & 0xff) << 16); // getting the red colour
            imgArr[row][col] = argb;
            col++;
            if (col == width) {
                col = 0;
                row++;
            }
        }
    } else {
        final int pixelLength = 3;
        for (int pixel = 0, row = 0, col = 0; pixel < imgPixels.length; pixel = pixel + pixelLength) {
            int argb = 0;
            argb += Integer.MIN_VALUE;
            argb += ((int) imgPixels[pixel] & 0xff); // getting the blue colour
            argb += (((int) imgPixels[pixel + 1] & 0xff) << 8); // getting the green colour
            argb += (((int) imgPixels[pixel + 2] & 0xff) << 16); // getting the red colour
            imgArr[row][col] = argb;
            col++;
            if (col == width) {
                col = 0;
                row++;
            }
        }
    }
    return imgArr;
}
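A short usage sketch (the x and y coordinates are placeholders) showing how the alpha of a single pixel can then be read back out of the packed values in the returned array:
int[][] pixels = getImageData(img);
int argb = pixels[y][x];  // note the [row][column] layout, i.e. [y][x]
int alpha = argb >>> 24;  // 0 = fully transparent, 255 = fully opaque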

Convert a BufferedImage into a 2D array

I have a BufferedImage which represents a 2048x2048-pixel TIFF image. I wish to retrieve such an array (int[2048][2048]) from the BufferedImage. How should I proceed?
int[][] arr = new int[2048][2048];
for (int i = 0; i < 2048; i++)
    for (int j = 0; j < 2048; j++)
        arr[i][j] = image.getRGB(i, j);
Since you can get the RGB value of each pixel from the image data structure itself, it might be a benefit to NOT copy everything over to a 2d array.
This method will return the red, green and blue values directly for each pixel, and if there is an alpha channel it will add the alpha value. Using this method is harder in terms of calculating indices, but is much faster than the first approach.
private static int[][] convertTo2DWithoutUsingGetRGB(BufferedImage image) {
    final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    final int width = image.getWidth();
    final int height = image.getHeight();
    final boolean hasAlphaChannel = image.getAlphaRaster() != null;
    int[][] result = new int[height][width];
    if (hasAlphaChannel) {
        final int pixelLength = 4;
        for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
            int argb = 0;
            argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
            argb += ((int) pixels[pixel + 1] & 0xff); // blue
            argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
            argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
            result[row][col] = argb;
            col++;
            if (col == width) {
                col = 0;
                row++;
            }
        }
    } else {
        final int pixelLength = 3;
        for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
            int argb = 0;
            argb += -16777216; // 255 alpha
            argb += ((int) pixels[pixel] & 0xff); // blue
            argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
            argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
            result[row][col] = argb;
            col++;
            if (col == width) {
                col = 0;
                row++;
            }
        }
    }
    return result;
}
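A minimal usage sketch under the question's assumptions (the 2048x2048 BufferedImage has already been loaded into a variable named image), showing how the returned array is indexed and how the individual channels can be unpacked:
int[][] result = convertTo2DWithoutUsingGetRGB(image);
int argb  = result[100][200]; // pixel at x = 200, y = 100 ([row][column] layout)
int red   = (argb >> 16) & 0xFF;
int green = (argb >> 8) & 0xFF;
int blue  = argb & 0xFF;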
