How to create a valid image from RGB pixel values using Java

I have 2D arrays (r, g and b) which contain the RGB values of an image. I need to create a valid image from these pixel values and save it. The code I use to flatten the arrays into a single pixel array is given below. I want to implement this part in my project, so please help me with this. Thank you.
int[] pixels = new int[imageSize * 3];
int k = 0;
for (int i = 0; i < height; i++)
{
    for (int j = 0; j < width; j++)
    {
        if (k < imageSize * 3)
        {
            pixels[k] = r[i][j];
            pixels[k + 1] = g[i][j];
            pixels[k + 2] = b[i][j];
        }
        k = k + 3;
    }
}

You can build a BufferedImage of type BufferedImage.TYPE_INT_RGB. This type represents a color as a single int where:
bits 16-23 hold red,
bits 8-15 hold green, and
bits 0-7 hold blue.
You can get the pixel RGB value as follows:
int rgb = red;
rgb = (rgb << 8) + green;
rgb = (rgb << 8) + blue;
Example:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int rgb = r[y][x];
        rgb = (rgb << 8) + g[y][x];
        rgb = (rgb << 8) + b[y][x];
        image.setRGB(x, y, rgb);
    }
}
File outputFile = new File("/output.bmp");
ImageIO.write(image, "bmp", outputFile);
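If you would rather start from the interleaved pixels array built in the question than from the separate r, g and b arrays, a minimal sketch of the same conversion could look like this, assuming row-major order, three samples per pixel and values in the 0-255 range:
// Sketch: build a TYPE_INT_RGB image from the question's interleaved {r, g, b} array.
// Assumes pixels.length == width * height * 3 and that each sample is in 0-255.
BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
int k = 0;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int rgb = (pixels[k] << 16) | (pixels[k + 1] << 8) | pixels[k + 2];
        img.setRGB(x, y, rgb);
        k += 3;
    }
}
ImageIO.write(img, "bmp", new File("/output.bmp"));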

Related

Getting Pixel Values from Byte Array

I'm having trouble getting pixel data.
My program takes screenshots, and every loop it stores the previous screenshot.
My goal is to do a comparison at the pixel level between the current screenshot and the old one.
I've run this code, which tells me what format the screenshots are in:
System.out.println(image.getType());
The output of this (for my program) is 1, meaning it's a BufferedImage.TYPE_INT_RGB.
From what I've read, the type determines what order the pixel values are in within the byte array.
I'm using this code to convert my BufferedImage to a byte array (the BufferedImage is created using the java.awt.Robot class):
public byte[] convertToByteArray(BufferedImage img) {
    byte[] imageInByte = null;
    try {
        // convert BufferedImage to byte array
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageIO.write(img, "png", baos);
        baos.flush();
        imageInByte = baos.toByteArray();
        baos.close();
    } catch (IOException ex) {
        Logger.getLogger(OverlayWindow.class.getName()).log(Level.SEVERE, null, ex);
    }
    return imageInByte;
}
Finally I use a comparison method to check the byte array. For now this only prints the color values of the array:
final byte[] pixels = convertToByteArray(image);
final int pixelLength = 3;
for (int pixel = 0, row = 0, col = 0; pixel < 1; pixel += pixelLength) {
    int argb = 0;
    argb += -16777216; // 255 alpha
    argb += ((int) pixels[pixel] & 0xff); // blue
    argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
    argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
    int r = (argb >> 16) & 0xff, g = (argb >> 8) & 0xff, b = argb & 0xff;
    System.out.println("R = " + r);
    System.out.println("G = " + g);
    System.out.println("B = " + b);
    col++;
    if (col == width) {
        col = 0;
        row++;
    }
}
My issue with this code is that even though I take a screenshot of a solid color, the pixel values are all over the place. I'm expecting each pixel to have the same color values.
Edit:
I'm avoiding using getRGB for performance reasons. Iterating through two large images calling getRGB each time is very costly in my application.
The easiest way to access the pixel values in a BufferedImage is to use the Raster:
BufferedImage image = ...
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        for (int c = 0; c < image.getRaster().getNumBands(); c++) {
            final int value = image.getRaster().getSample(x, y, c); // value of channel c of the pixel (x, y)
        }
    }
}
The Raster takes care of the encoding for you, which makes it the easiest way to access the pixel values. The fastest way, however, is to use the DataBuffer directly, but then you have to manage all the encodings yourself.
/* This method takes a BufferedImage encoded with TYPE_INT_ARGB and copies the pixel values into an image encoded with TYPE_4BYTE_ABGR. */
public static void IntToByte(BufferedImage source, BufferedImage result)
{
    final byte[] bb = ((DataBufferByte) result.getRaster().getDataBuffer()).getData();
    final int[] ib = ((DataBufferInt) source.getRaster().getDataBuffer()).getData();
    switch (source.getType())
    {
        case BufferedImage.TYPE_INT_ARGB:
            for (int i = 0, b = 0; i < ib.length; i++, b += 4)
            {
                int p = ib[i];
                bb[b]     = (byte) ((p & 0xFF000000) >> 24); // alpha
                bb[b + 3] = (byte) ((p & 0xFF0000) >> 16);   // red
                bb[b + 2] = (byte) ((p & 0xFF00) >> 8);      // green
                bb[b + 1] = (byte) ( p & 0xFF);              // blue
            }
            break;
        // Many other cases to manage...
    }
}
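A minimal usage sketch for the method above; the image dimensions here are placeholders, and both images are assumed to have the same size:
// Sketch: convert an int-backed ARGB image into a 4-byte ABGR image.
BufferedImage source = new BufferedImage(640, 480, BufferedImage.TYPE_INT_ARGB); // placeholder size
BufferedImage result = new BufferedImage(source.getWidth(), source.getHeight(),
        BufferedImage.TYPE_4BYTE_ABGR);
IntToByte(source, result);
// result's byte DataBuffer now holds the same pixels as source, laid out as A, B, G, R bytes.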

Getting the color of a pixel [duplicate]

Given an image file, say of PNG format, how do I get an array of int [r,g,b,a] representing the pixel located at row i, column j?
So far I am starting here:
private static int[][][] getPixels(BufferedImage image) {
    final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    final int width = image.getWidth();
    final int height = image.getHeight();
    int[][][] result = new int[height][width][4];
    // SOLUTION GOES HERE....
}
Thanks in advance!
You need to get the packed pixel value as an int; you can then use Color(int, boolean) to build a Color object from which you can extract the RGBA values, for example...
private static int[][][] getPixels(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int[][][] result = new int[height][width][4];
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            // true = keep the alpha component from the packed int
            Color c = new Color(image.getRGB(x, y), true);
            result[y][x][0] = c.getRed();
            result[y][x][1] = c.getGreen();
            result[y][x][2] = c.getBlue();
            result[y][x][3] = c.getAlpha();
        }
    }
    return result;
}
It's not the most efficient method, but it is one of the simplest.
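A short usage sketch (the file name is a placeholder); pixels[i][j] then holds the [r, g, b, a] values for row i, column j, as asked:
// Sketch: read an image and look up one pixel's components.
BufferedImage image = ImageIO.read(new File("some.png")); // placeholder file name
int[][][] pixels = getPixels(image);
int i = 0, j = 0; // row and column of interest
System.out.println(java.util.Arrays.toString(pixels[i][j])); // prints [r, g, b, a]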
BufferedImage has a method called getRGB(int x, int y) which returns an int where each byte is one of the components of the pixel (alpha, red, green and blue). If you don't want to do the bitwise operations yourself, you can use Color.getRed/getGreen/getBlue by creating a new instance of java.awt.Color with the int from getRGB.
You can do this in a loop to fill the three-dimensional array.
This is my code for this problem:
File f = new File(filePath); // image path with image name, like "lena.jpg"
BufferedImage img = ImageIO.read(f);
if (img == null) // if img is null, return
    return;
// 3D array [x][y][a,r,g,b]
int[][][] pixel3DArray = new int[img.getWidth()][img.getHeight()][4];
for (int x = 0; x < img.getWidth(); x++) {
    for (int y = 0; y < img.getHeight(); y++) {
        int px = img.getRGB(x, y); // get pixel at location (x, y)
        // get alpha: shift the value and mask
        pixel3DArray[x][y][0] = (px >> 24) & 0xff;
        // get red
        pixel3DArray[x][y][1] = (px >> 16) & 0xff;
        // get green
        pixel3DArray[x][y][2] = (px >> 8) & 0xff;
        // get blue
        pixel3DArray[x][y][3] = px & 0xff;
    }
}

Using BufferedImage to read and write to an image file

Below is the code that reads in RGB values using BufferedImage and then simply writes them back out again to a file. The resultant image is perfect and looks good. No worries there.
I run a print test to print out the first 10 RGB int values. This is done for the "test.png" file, and then for the resultant image, "new-test.png".
For some reason I am getting different RGB values between the two files.
E.g. (the first 3 RGB int values):
test.png : -16704215, -16704215, -16704215
new-test.png : -16638935, -16638935, -16573142
Can anyone identify why I am getting different RGB values printed out for the two test files?
try
{
    BufferedImage imgBuf = ImageIO.read(new File("test.png")); // also testing with GIFs, JPEGs
    int w = imgBuf.getWidth();
    int h = imgBuf.getHeight();
    int[] RGBarray = imgBuf.getRGB(0, 0, w, h, null, 0, w);
    // Arrays to store their respective component values
    int[][] redPixels = new int[h][w];
    int[][] greenPixels = new int[h][w];
    int[][] bluePixels = new int[h][w];
    for (int i = 0; i <= 10; i++)
    {
        // print out the first 10 RGB int values - testing purposes
        System.out.println(RGBarray[i]);
    }
    // Separate the RGB int values into 3 arrays: red, green and blue ....
    int x = 0;
    for (int row = 0; row < h; row++)
    {
        for (int col = 0; col < w; col++)
        {
            redPixels[row][col] = ((RGBarray[x] >> 16) & 0xff);
            greenPixels[row][col] = ((RGBarray[x] >> 8) & 0xff);
            bluePixels[row][col] = (RGBarray[x] & 0xff);
            x++;
        }
    }
    // set pixels back using setRGB() ...
    BufferedImage bufferedImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    for (int row = 0; row < h; row++)
    {
        for (int col = 0; col < w; col++)
        {
            // use bit shifting to re-form the RGB int again
            int rgb = (redPixels[row][col] & 0xff) << 16 | (greenPixels[row][col] & 0xff) << 8 | (bluePixels[row][col] & 0xff);
            bufferedImage.setRGB(col, row, rgb);
        }
    }
}
catch (IOException i) {} // This exception format is only temporary!
Here is a small modification of your code that also saves the image and then reads it back again. With this code I get exactly the same int values before and after.
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import org.junit.Test;

public class PngReadWriteTest {

    @Test
    public void test() {
        try
        {
            BufferedImage imgBuf = ImageIO.read(new File("test.png")); // also testing with GIFs, JPEGs
            int w = imgBuf.getWidth();
            int h = imgBuf.getHeight();
            int[] RGBarray = imgBuf.getRGB(0, 0, w, h, null, 0, w);
            // Arrays to store their respective component values
            int[][] redPixels = new int[h][w];
            int[][] greenPixels = new int[h][w];
            int[][] bluePixels = new int[h][w];
            System.out.println("IMAGE FIRST READ:");
            printPixelValues(imgBuf);
            // Separate the RGB int values into 3 arrays: red, green and blue ....
            int x = 0;
            for (int row = 0; row < h; row++)
            {
                for (int col = 0; col < w; col++)
                {
                    redPixels[row][col] = ((RGBarray[x] >> 16) & 0xff);
                    greenPixels[row][col] = ((RGBarray[x] >> 8) & 0xff);
                    bluePixels[row][col] = (RGBarray[x] & 0xff);
                    x++;
                }
            }
            // set pixels back using setRGB() ...
            BufferedImage bufferedImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            for (int row = 0; row < h; row++)
            {
                for (int col = 0; col < w; col++)
                {
                    // use bit shifting to re-form the RGB int again
                    int rgb = (redPixels[row][col] & 0xff) << 16 | (greenPixels[row][col] & 0xff) << 8 | (bluePixels[row][col] & 0xff);
                    bufferedImage.setRGB(col, row, rgb);
                }
            }
            System.out.println("IMAGE BEFORE SAVE:");
            printPixelValues(bufferedImage);
            // Save image as png
            ImageIO.write(bufferedImage, "png", new File("new-test.png"));
            BufferedImage newImage = ImageIO.read(new File("new-test.png")); // also testing with GIFs, JPEGs
            System.out.println("IMAGE AFTER SECOND READ:");
            printPixelValues(newImage);
        }
        catch (IOException i) {} // This exception format is only temporary!
    }

    private void printPixelValues(BufferedImage image) {
        int w = image.getWidth();
        int h = image.getHeight();
        int[] RGBarray = image.getRGB(0, 0, w, h, null, 0, w);
        for (int i = 0; i <= 10; i++)
        {
            // print out the first 10 RGB int values - testing purposes
            System.out.println(RGBarray[i]);
        }
    }
}
This will only work if you use PNG since it is lossless. If you use JPG you will not get the same result since it is lossy. when you save the file as a JPG it will not result in a file containing the exact same bytes as in your first image. The level of compression will affect the image. So even if the files looks just the same by eye, they will not be exactly the same.
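A small sketch of that difference (the copy file names are placeholders, and the snippet assumes it runs somewhere that handles IOException); the PNG round-trip reproduces the exact int values, while the JPG round-trip usually does not:
// Sketch: write the same image as PNG and as JPG, read both back, and compare one pixel.
BufferedImage original = ImageIO.read(new File("test.png"));
// Copy into a TYPE_INT_RGB image first, since JPG has no alpha channel.
BufferedImage rgb = new BufferedImage(original.getWidth(), original.getHeight(), BufferedImage.TYPE_INT_RGB);
Graphics2D g = rgb.createGraphics();
g.drawImage(original, 0, 0, null);
g.dispose();
ImageIO.write(rgb, "png", new File("copy.png")); // placeholder file names
ImageIO.write(rgb, "jpg", new File("copy.jpg"));
int before = rgb.getRGB(0, 0);
int fromPng = ImageIO.read(new File("copy.png")).getRGB(0, 0);
int fromJpg = ImageIO.read(new File("copy.jpg")).getRGB(0, 0);
System.out.println(before == fromPng); // true: PNG is lossless
System.out.println(before == fromJpg); // usually false: JPG compression changes the pixel values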

pixel value representation for the image in java

I am trying to read the pixel values of an image using the code below:
int[] pixel;
BufferedImage imageA = ImageIO.read(new File("xyz.bmp"));
for (int y = 0; y < imageA.getHeight(); ++y) {
    for (int x = 0; x < imageA.getWidth(); ++x) {
        pixel = imageA.getRaster().getPixel(x, y, new int[3]);
    }
}
The RGB values are stored in pixel[0], pixel[1] and pixel[2] respectively, and when I looked at the output I saw values ranging between 0 and 255.
I have seen some people use the code below to get pixel values:
int pixel;
BufferedImage imageA = ImageIO.read(new File("xyz.bmp"));
for (int y = 0; y < imageA.getHeight(); ++y) {
    for (int x = 0; x < imageA.getWidth(); ++x) {
        pixel = imageA.getRGB(x, y);
    }
}
When I looked at the output for a particular pixel it was -14935264. What does this value represent, and what is the difference between the two methods above?
In the second case, you get an int that contains the RGB values in the lower 24 bits: the red component is bits 16-23, the green component is bits 8-15 and the blue component is bits 0-7. getRGB returns the pixel in the default ARGB color model, so the alpha component sits in the top byte (bits 24-31); for an opaque pixel that byte is 0xFF, which is why the value prints as a negative number.
If you want to get the components out of the int:
int red = (pixel >> 16) & 0xFF;
int green = (pixel >> 8) & 0xFF;
int blue = pixel & 0xFF;
The other way around:
int pixel = (red << 16) | (green << 8) | blue;
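As a concrete check, decoding the -14935264 from the question with the shifts above (a tiny sketch):
// -14935264 is 0xFF1C1B20 when viewed as 32 unsigned bits.
int pixel = -14935264;
int alpha = (pixel >> 24) & 0xFF; // 255 - fully opaque; this top byte is what makes the int negative
int red   = (pixel >> 16) & 0xFF; // 28
int green = (pixel >> 8) & 0xFF;  // 27
int blue  = pixel & 0xFF;         // 32
System.out.println("a=" + alpha + " r=" + red + " g=" + green + " b=" + blue); // a very dark grey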

Saving two BufferedImages within a LayeredPane

I am working with a LayeredPane that contains two images, one on each layer. I have been working on a method that obtains the image positions (based on the positions of the labels the images are in) and then saves the bottom image (the lower one within the layeredPane) and also the top one if it is covering the bottom image at all (it may cover only part of it). However, I have been having some trouble with this and I'm a bit unsure how to get it working properly.
I have been stuck on this for quite a while now, so any help with my existing code, or thoughts on how I should approach this another way, would be a big help for me.
Thanks in advance.
public void saveImageLayering(BufferedImage topImg, BufferedImage bottomImg, JLabel topLabel, JLabel bottomLabel) {
    int width = bottomImg.getWidth();
    int height = bottomImg.getHeight();
    Point bottomPoint = new Point();
    Point topPoint = new Point();
    bottomPoint = bottomLabel.getLocation();
    topPoint = topLabel.getLocation();
    System.out.println("image x coordinate " + bottomPoint.x);
    System.out.println("image y coordinate " + bottomPoint.y);
    // arrays to store the bottom image
    int bottomRedImgArray[][] = new int[width][height];
    int bottomGreenImgArray[][] = new int[width][height];
    int bottomBlueImgArray[][] = new int[width][height];
    // arrays to store the top image
    int topRedImgArray[][] = new int[width][height];
    int topGreenImgArray[][] = new int[width][height];
    int topBlueImgArray[][] = new int[width][height];
    // loop through the bottom image and get all pixels' RGB values
    for (int i = bottomPoint.x; i < width; i++) {
        for (int j = bottomPoint.y; j < height; j++) {
            // set pixel equal to the RGB value of the pixel being looked at
            pixel = new Color(bottomImg.getRGB(i, j));
            // contain the RGB values in the respective RGB arrays
            bottomRedImgArray[i][j] = pixel.getRed();
            bottomGreenImgArray[i][j] = pixel.getGreen();
            bottomBlueImgArray[i][j] = pixel.getBlue();
        }
    }
    // create new image the same size as the old one
    BufferedImage newBottomImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    // set values within the 2d arrays into the new image
    for (int x1 = 0; x1 < width; x1++) {
        for (int y1 = 0; y1 < height; y1++) {
            // putting values back into the buffered image
            int newPixel = (int) bottomRedImgArray[x1][y1];
            newPixel = (newPixel << 8) + (int) bottomGreenImgArray[x1][y1];
            newPixel = (newPixel << 8) + (int) bottomBlueImgArray[x1][y1];
            newBottomImage.setRGB(x1, y1, newPixel);
        }
    }
    // create rectangle around bottom image to check if coordinates of top are inside, and save only the ones that are
    Rectangle rec = new Rectangle(bottomPoint.x, bottomPoint.y, bottomImg.getWidth(), bottomImg.getHeight());
    // loop through the top image and get all pixels' RGB values
    for (int i = bottomPoint.x; i < bottomImg.getWidth(); i++) {
        for (int j = bottomPoint.y; j < bottomImg.getHeight(); j++) {
            // if top image is inside lower image then getRGB values
            if (rec.contains(topPoint)) { // doesn't contain any..
                if (firstPointFound == true) {
                    // set pixel equal to the RGB value of the pixel being looked at
                    pixel = new Color(topImg.getRGB(i, j));
                    // contain the RGB values in the respective RGB arrays
                    topRedImgArray[i][j] = pixel.getRed();
                    topGreenImgArray[i][j] = pixel.getGreen();
                    topBlueImgArray[i][j] = pixel.getBlue();
                } else {
                    firstPoint = new Point(i, j);
                    firstPointFound = true;
                }
            }
        }
    }
    // create new image the same size as the old one
    BufferedImage newTopImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    // set values within the 2d arrays into the new image
    for (int x1 = 0; x1 < topImg.getWidth(); x1++) {
        for (int y1 = 0; y1 < topImg.getHeight(); y1++) {
            // putting values back into the buffered image
            int newPixel = (int) topRedImgArray[x1][y1];
            newPixel = (newPixel << 8) + (int) topGreenImgArray[x1][y1];
            newPixel = (newPixel << 8) + (int) topBlueImgArray[x1][y1];
            newTopImage.setRGB(x1, y1, newPixel);
        }
    }
    BufferedImage newImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    // uses Graphics.drawImage() to place them on top of each other
    Graphics g = newImage.getGraphics();
    g.drawImage(newBottomImage, bottomPoint.x, bottomPoint.y, null);
    g.drawImage(newTopImage, firstPoint.x, firstPoint.y, null);
    try {
        // then save as image once all in correct order
        File outputfile = new File("saved_Layered_Image.png");
        ImageIO.write(newImage, "png", outputfile);
        JOptionPane.showMessageDialog(null, "New image saved successfully");
    } catch (IOException e) {
        e.printStackTrace();
    }
}
I'm not really sure why you're messing around with the pixels; however, the idea is relatively simple.
Basically, you want to create a third "merged" image which is the same size as the bottomImage. From there, you simply paint the bottomImage onto the merged image at 0x0.
Then you calculate the offset of the topImage from the bottomImage's location and paint it at that point.
BufferedImage merged = new BufferedImage(bottomImg.getWidth(), bottomImg.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g2d = merged.createGraphics();
g2d.drawImage(bottomImg, 0, 0, this);
int x = topPoint.x - bottomPoint.x;
int y = topPoint.y - bottomPoint.y;
g2d.drawImage(topImg, x, y, this);
g2d.dispose();
Using this basic idea, I was able to produce these...
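To write the merged result to disk (the original goal), the same kind of ImageIO call already used in the question's code can follow, for example:
// Sketch: save the merged image, reusing the file name from the question's code.
// (Inside a try/catch for IOException, as in the question.)
ImageIO.write(merged, "png", new File("saved_Layered_Image.png"));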
