Java BufferedImage setRGB / getRGB, 2 different results - java

I'm trying to convert an image into a matrix and then convert it back, but the two pictures are different.
Converting it into a matrix:
public int[][] getMatrixOfImage(BufferedImage bufferedImage) {
    int width = bufferedImage.getWidth(null);
    int height = bufferedImage.getHeight(null);
    int[][] pixels = new int[width][height];
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            pixels[i][j] = bufferedImage.getRGB(i, j);
        }
    }
    return pixels;
}
and converting it back into a BufferedImage:
public BufferedImage matrixToBufferedImage(int[][] matrix) {
    int width = matrix[0].length;
    int height = matrix.length;
    BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB_PRE);
    for (int i = 0; i < matrix.length; i++) {
        for (int j = 0; j < matrix[0].length; j++) {
            int pixel = matrix[i][j] << 24 | matrix[i][j] << 16 | matrix[i][j] << 8 | matrix[i][j];
            bufferedImage.setRGB(i, j, pixel);
        }
    }
    return bufferedImage;
}
with this result:
http://img59.imageshack.us/img59/5464/mt8a.png
Thanks!

getRGB() already returns a fully packed ARGB value, so there is nothing to reassemble. Why do you do

    int pixel = matrix[i][j] << 24 | matrix[i][j] << 16 | matrix[i][j] << 8 | matrix[i][j];

instead of just

    int pixel = matrix[i][j];

?
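
The point above can be shown as a complete round trip that stores and restores the packed ARGB values unchanged. This is a minimal sketch (class and method names are my own, not from the question); it uses TYPE_INT_ARGB rather than TYPE_INT_ARGB_PRE, since the premultiplied type can alter channel values of translucent pixels during setRGB/getRGB conversion:

```java
import java.awt.image.BufferedImage;

public class RoundTrip {
    // Copy each packed ARGB value straight into the matrix...
    public static int[][] toMatrix(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        int[][] pixels = new int[w][h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                pixels[x][y] = img.getRGB(x, y); // already a packed ARGB int
        return pixels;
    }

    // ...and straight back out, with no shifting or OR-ing.
    public static BufferedImage toImage(int[][] matrix) {
        int w = matrix.length, h = matrix[0].length;
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                img.setRGB(x, y, matrix[x][y]);
        return img;
    }
}
```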

Related

Convolution Kernel - image comes out as a mirror image

So I have some code for convolving a grayscale image in Java using a convolution kernel. It seems to work reasonably well, but the image comes out mirrored, as if it were copied from the end of each row rather than the start. Can anyone help me understand what's happening here?
The problem appears to be in the convertToArrayLocation() method: if I try to recreate an image from the array this method produces, the image is mirrored.
public class GetTwoDimensionalPixelArray {
    public static BufferedImage inputImage, output;
    public static final int[][] IDENTITY = {{0, 0, 0}, {0, 1, 0}, {0, 0, 0}};
    public static final int[][] EDGE_DETECTION_1 = {{-1, -1, -1}, {-1, 8, -1}, {-1, -1, -1}};
    public static int[][] SHARPEN = {{0, -1, 0}, {-1, 5, -1}, {0, -1, 0}};
    public static int WIDTH, HEIGHT;
    public static int order = SHARPEN.length;

    public static void main(String[] args) throws IOException {
        System.out.println(WIDTH);
        BufferedImage inputImage = ImageIO.read(new File("it-gs.png")); // load the image from the current folder
        WIDTH = inputImage.getWidth();
        HEIGHT = inputImage.getHeight();
        int[][] result = convertToArrayLocation(inputImage); // pass the buffered image to the method and get back the result
        System.out.println("height" + result.length + "width" + result[0].length);
        int[][] outputarray = convolution2D(result, WIDTH, HEIGHT, EDGE_DETECTION_1,
                EDGE_DETECTION_1.length, EDGE_DETECTION_1.length);
        int opwidth = outputarray[0].length;
        int opheight = outputarray.length;
        System.out.println("W" + opwidth + "H" + opheight);
        BufferedImage img = new BufferedImage(opheight, opwidth, BufferedImage.TYPE_BYTE_GRAY);
        for (int r = 0; r < opheight; r++) {
            for (int t = 0; t < opwidth; t++) {
                img.setRGB(r, t, outputarray[r][t]);
            }
        }
        try {
            File imageFile = new File("C:\\Users\\ciara\\eclipse-workspace\\it.png");
            ImageIO.write(img, "png", imageFile);
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    private static int[][] convertToArrayLocation(BufferedImage inputImage) {
        // get the pixel values as a single array from the buffered image
        final byte[] pixels = ((DataBufferByte) inputImage.getRaster().getDataBuffer()).getData();
        final int width = inputImage.getWidth();   // image width
        final int height = inputImage.getHeight(); // image height
        System.out.println("height" + height + "width" + width);
        int[][] result = new int[height][width]; // initialize the array with height and width
        // this loop copies the pixel values into the two-dimensional array
        for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel++) {
            int argb = (int) pixels[pixel];
            // if the pixel value is negative, change it to positive // still weird to me
            if (argb < 0) {
                argb += 256;
            }
            result[row][col] = argb;
            col++;
            if (col == width) {
                col = 0;
                row++;
            }
        }
        return result;
    }

    public static int[][] convolution2D(int[][] input, int width, int height,
            int[][] kernel, int kernelWidth, int kernelHeight) {
        int smallWidth = width - kernelWidth + 1;
        int smallHeight = height - kernelHeight + 1;
        int[][] output = new int[smallHeight][smallWidth];
        for (int i = 0; i < smallHeight; ++i) {
            for (int j = 0; j < smallWidth; ++j) {
                output[i][j] = 0;
            }
        }
        for (int i = 0; i < smallHeight; ++i) {
            for (int j = 0; j < smallWidth; ++j) {
                output[i][j] = singlePixelConvolution(input, i, j, kernel, kernelWidth, kernelHeight);
            }
        }
        return output;
    }

    public static int singlePixelConvolution(int[][] input, int x, int y, int[][] k,
            int kernelWidth, int kernelHeight) {
        int output = 0;
        for (int i = 0; i < kernelHeight; ++i) {
            for (int j = 0; j < kernelWidth; ++j) {
                try {
                    output = output + (input[x + i][y + j] * k[i][j]);
                } catch (Exception e) {
                    continue;
                }
            }
        }
        return output;
    }
}
As you probably know by now, this is not an error but the expected result for convolution. Convolution mirrors its output, unlike correlation, which does not. https://en.wikipedia.org/wiki/Convolution
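
The distinction the answer is drawing is easiest to see in one dimension: convolution applies the kernel flipped, while correlation applies it as-is. A small illustrative sketch (helper names are mine, not from the post):

```java
public class ConvVsCorr {
    // Correlation: slide the kernel over the input as-is.
    public static int correlate1D(int[] in, int[] k, int pos) {
        int sum = 0;
        for (int i = 0; i < k.length; i++) {
            sum += in[pos + i] * k[i];
        }
        return sum;
    }

    // Convolution: the same sum, but with the kernel flipped end-to-end.
    public static int convolve1D(int[] in, int[] k, int pos) {
        int sum = 0;
        for (int i = 0; i < k.length; i++) {
            sum += in[pos + i] * k[k.length - 1 - i];
        }
        return sum;
    }
}
```

For a symmetric kernel (like the EDGE_DETECTION_1 matrix above) the two coincide, which is why the flip often goes unnoticed.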

Java BufferedImage write / read (different rgb values)

I wrote an image with this code:
BufferedImage newImage = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
index = 0;
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        int r = ciphered[index++];
        int g = ciphered[index++];
        int b = ciphered[index++];
        Color newColor = new Color(r, g, b);
        newImage.setRGB(j, i, newColor.getRGB());
    }
}
File output = new File("/Users/newbie/Desktop/encrypted.jpg");
ImageIO.write(newImage, "jpg", output);
When I try to read the image ("encrypted.jpg") I get different RGB values. I read the image with the following code:
File input = new File("/Users/newbie/Desktop/encrypted.jpg");
BufferedImage image = ImageIO.read(input);
int index = 0;
int[] t = new int[width * height * 3];
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        Color c = new Color(image.getRGB(j, i));
        int r = c.getRed();
        int g = c.getGreen();
        int b = c.getBlue();
        t[index++] = r;
        t[index++] = g;
        t[index++] = b;
    }
}
I don't understand what I'm doing wrong. I just get different RGB values from the ones I inserted.
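
No answer is preserved for this thread, but a likely culprit is the output format itself: JPEG compression is lossy, so the pixel values written are not the values read back. A lossless format such as PNG preserves them exactly. A minimal sketch of that idea (the helper name is mine, not from the thread):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class LosslessRoundTrip {
    // Save with a lossless format ("png") instead of "jpg", then read it
    // back: the pixel values survive unchanged.
    public static BufferedImage roundTrip(BufferedImage img, File file) throws IOException {
        ImageIO.write(img, "png", file);
        return ImageIO.read(file);
    }
}
```

Note that TYPE_3BYTE_BGR has no alpha channel, so comparisons should look at the low 24 bits of getRGB().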

Why does this code only rotate squares?

I want to use this algorithm for image rotation, but I realized that it only rotates squares, not rectangles.
Would anyone know why?
Main problem code:
public static int[] rotate(double angle, int[] pixels, int width, int height) {
    final double radians = Math.toRadians(angle);
    final double cos = Math.cos(radians);
    final double sin = Math.sin(radians);
    final int[] pixels2 = new int[pixels.length];
    for (int pixel = 0; pixel < pixels2.length; pixel++) {
        pixels2[pixel] = 0xFFFFFF;
    }
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            final int centerx = width / 2;
            final int centery = height / 2;
            final int m = x - centerx;
            final int n = y - centery;
            final int j = ((int) (m * cos + n * sin)) + centerx;
            final int k = ((int) (n * cos - m * sin)) + centery;
            if (j >= 0 && j < width && k >= 0 && k < height) {
                pixels2[y * width + x] = pixels[k * width + j];
            }
        }
    }
    return pixels2;
}
Context application:
try {
    BufferedImage testrot = ImageIO.read(new File("./32x32.png"));
    int[] linearpixels = new int[testrot.getWidth() * testrot.getHeight()];
    int c = 0;
    for (int i = 0; i < testrot.getWidth(); i++) {
        for (int j = 0; j < testrot.getHeight(); j++) {
            linearpixels[c] = testrot.getRGB(i, j);
            c++;
        }
    }
    int[] lintestrot = rotate(50, linearpixels, 32, 32);
    BufferedImage image = new BufferedImage(70, 70, BufferedImage.TYPE_INT_RGB);
    c = 0;
    for (int i = 0; i < 32; i++) {
        for (int j = 0; j < 32; j++) {
            image.setRGB(i, j, lintestrot[c]);
            c++;
        }
    }
    File outputfile = new File("test002.bmp");
    ImageIO.write(image, "bmp", outputfile);
} catch (IOException e1) {
    e1.printStackTrace();
}
If you alter the width or height to 33, the result is wrong (a distorted image).
Your algorithm actually does work. The problem is with the loops in your context application. Because the pixels are stored in raster order, the outer loop needs to iterate over the height and the inner loop over the width, e.g.:
for (int i = 0; i < testrot.getHeight(); i++) {
    for (int j = 0; j < testrot.getWidth(); j++) {
        linearpixels[c] = testrot.getRGB(j, i); // edit here, tested
        c++;
    }
}
Then if you change height to 40 for example:
int[] lintestrot = rotate(50, linearpixels, 32, 40);
The loops need to change like this (note that setRGB takes the x coordinate first, so the column index j goes first):

c = 0;
for (int i = 0; i < 40; i++) {
    for (int j = 0; j < 32; j++) {
        image.setRGB(j, i, lintestrot[c]);
        c++;
    }
}
Note that the order is reversed in the loops (height then width) compared to the function call (width then height).
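
Both fixes rest on the same rule: in raster (row-major) order, the pixel at column x, row y of a width-wide image lives at index y * width + x in the flat array. A tiny sketch of that mapping (the class and method names are mine):

```java
public class RasterOrder {
    // Row-major flattening: the pixel at (x, y) lands at y * width + x.
    public static int index(int x, int y, int width) {
        return y * width + x;
    }

    // Flatten a grid[y][x] matrix into a single raster-order array.
    public static int[] flatten(int[][] grid) {
        int h = grid.length, w = grid[0].length;
        int[] flat = new int[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                flat[index(x, y, w)] = grid[y][x];
            }
        }
        return flat;
    }
}
```

Iterating x in the outer loop and y in the inner loop (as the original context code did) only coincides with this layout when width == height, which is why squares appeared to work.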

Java mirror image diagonal method not working

I'm having trouble getting my method to work. The method should mirror any image I choose on its diagonal to produce a mirror effect, but at the moment it just produces the same image unedited, and I don't know what I'm doing wrong. Any help would be greatly appreciated. Thank you.
public Picture mirrorImageDiagonal() {
    int size = this.getWidth();
    Pixel rightPixel = null;
    Pixel leftTargetPixel = null;
    Pixel rightTargetPixel = null;
    Picture target = new Picture(size, size);
    for (double x = 0; x < size; x++) {
        for (double y = 0; y <= x; y++) {
            int yIndex = Math.min((int) y, this.getHeight() - 1);
            int xIndex = Math.min((int) x, this.getWidth() - 1);
            leftTargetPixel = target.getPixel(yIndex, xIndex);
            rightTargetPixel = target.getPixel(xIndex, yIndex);
            rightPixel = this.getPixel(xIndex, yIndex);
            rightTargetPixel.setColor(rightPixel.getColor());
            leftTargetPixel.setColor(rightPixel.getColor());
        }
    }
    return target;
}
I am assuming that you are trying to complete the A6 challenge in the picture lab packet. I just completed this for school, but if you are not, I hope this still helps you.
public void mirrorDiagonal() {
    Pixel[][] pixels = this.getPixels2D();
    Pixel pixel1 = null;
    Pixel pixel2 = null;
    int width = pixels[0].length;
    for (int row = 0; row < pixels.length; row++) {
        for (int col = 0; col < width; col++) {
            if (col < pixels.length) {
                pixel1 = pixels[row][col];
                pixel2 = pixels[col][row];
                pixel1.setColor(pixel2.getColor());
            }
        }
    }
}
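
The same transformation is easy to see on a plain square matrix, independent of the picture-lab Picture/Pixel classes. This sketch (my own naming) copies the lower triangle onto the upper one, which is the net effect of the loop above:

```java
public class MirrorSketch {
    // Mirror a square matrix across its main diagonal, in place:
    // every cell above the diagonal takes the value of its reflection below.
    public static void mirrorDiagonal(int[][] m) {
        for (int row = 0; row < m.length; row++) {
            for (int col = 0; col < row; col++) {
                m[col][row] = m[row][col];
            }
        }
    }
}
```

Restricting the inner loop to col < row is what prevents a second pass from copying already-overwritten values back.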

Convert Image to Grayscale with array matrix RGB java

I'm creating an image filter program, and I want to convert a coloured picture to a grayscale picture with the help of an array matrix.
This is what I have currently:
import java.awt.Color;
import se.lth.cs.ptdc.images.ImageFilter;

public class GrayScaleFilter extends ImageFilter {
    public GrayScaleFilter(String name) {
        super(name);
    }

    public Color[][] apply(Color[][] inPixels, double paramValue) {
        int height = inPixels.length;
        int width = inPixels[0].length;
        Color[][] outPixels = new Color[height][width];
        for (int i = 0; i < 256; i++) {
            grayLevels[i] = new Color(i, i, i);
        }
        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {
                Color pixel = inPixels[i][j];
                outPixels[i][j] = grayLevels[index];
            }
        }
        return outPixels;
    }
}
It looks like I'm supposed to use this formula: ((R+G+B)/3)
I want to create an array matrix like this:
Color[] grayLevels = new Color[256];
// creates the color (0,0,0) and puts it in grayLevels[0],
// (1,1,1) in grayLevels[1], ..., (255,255,255) in grayLevels[255]
This is the class I'm referring to when I want to use grayscale:
public abstract Color[][] apply(Color[][] inPixels, double paramValue);

protected short[][] computeIntensity(Color[][] pixels) {
    int height = pixels.length;
    int width = pixels[0].length;
    short[][] intensity = new short[height][width];
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            Color c = pixels[i][j];
            intensity[i][j] = (short) ((c.getRed() + c.getGreen() + c.getBlue()) / 3);
        }
    }
    return intensity;
}
Any feedback on how I can achieve this?

Instead of using outPixels[i][j] = new Color(intensity, intensity, intensity), build the grayLevels array this way:
for (int i = 0; i < 256; i++) {
    grayLevels[i] = new Color(i, i, i);
}
Then, when you need a certain color, just retrieve it as grayLevels[index].
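
Putting the pieces together, a complete filter might look like the sketch below. It is standalone rather than extending the se.lth.cs.ptdc.images.ImageFilter class (which isn't generally available), and it computes the index with the (R+G+B)/3 formula from computeIntensity:

```java
import java.awt.Color;

public class GrayScaleSketch {
    // Precomputed gray palette: grayLevels[i] holds the color (i, i, i).
    private static final Color[] grayLevels = new Color[256];
    static {
        for (int i = 0; i < 256; i++) {
            grayLevels[i] = new Color(i, i, i);
        }
    }

    // For each pixel, (R + G + B) / 3 picks the palette entry to use.
    public static Color[][] apply(Color[][] inPixels) {
        int height = inPixels.length;
        int width = inPixels[0].length;
        Color[][] outPixels = new Color[height][width];
        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {
                Color pixel = inPixels[i][j];
                int index = (pixel.getRed() + pixel.getGreen() + pixel.getBlue()) / 3;
                outPixels[i][j] = grayLevels[index];
            }
        }
        return outPixels;
    }
}
```

Building the palette once (here in a static initializer) avoids allocating a new Color object per pixel, which is the point of the grayLevels array.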
