As we read an RGB image, we do shifting operations to get the R, G and B matrices separately. Is it possible to read a grayscale image (JPEG), directly manipulate its pixel values, and then rewrite the image?
Ultimately I have to do the DCT operation on the grayscale image.
The code below will read the grayscale image into a simple two-dimensional array:
File file = new File("path/to/file");
BufferedImage img = ImageIO.read(file);
int width = img.getWidth();
int height = img.getHeight();
int[][] imgArr = new int[width][height];
Raster raster = img.getData();
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        imgArr[i][j] = raster.getSample(i, j, 0);
    }
}
Note: the raster.getSample(...) method takes three arguments: x, the X coordinate of the pixel; y, the Y coordinate of the pixel; and b, the band to return. For grayscale images there is only band 0.
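And since the question also asks about rewriting the image: the 2D array can be written back through the image's WritableRaster and saved with ImageIO.write. A minimal sketch; the out.jpg path and the toy 2x2 array are just placeholders:

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
import java.io.File;
import javax.imageio.ImageIO;

public class GrayWriteBack {

    // Turns a [width][height] array of gray values (0-255) back into an image.
    static BufferedImage toImage(int[][] imgArr) {
        int width = imgArr.length;
        int height = imgArr[0].length;
        BufferedImage out = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
        WritableRaster raster = out.getRaster();
        for (int i = 0; i < width; i++) {
            for (int j = 0; j < height; j++) {
                raster.setSample(i, j, 0, imgArr[i][j]); // band 0 = the gray band
            }
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        int[][] imgArr = { { 0, 128 }, { 255, 64 } }; // toy 2x2 "image"
        BufferedImage out = toImage(imgArr);
        ImageIO.write(out, "jpg", new File("out.jpg")); // placeholder output path
        System.out.println(out.getRaster().getSample(1, 0, 0)); // 255
    }
}
```

The DCT can then be run on the int[][] array before writing it back.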
Given an image file, say of PNG format, how do I get an array of ints [r, g, b, a] representing the pixel located at row i, column j?
So far I am starting here:
private static int[][][] getPixels(BufferedImage image) {
    final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    final int width = image.getWidth();
    final int height = image.getHeight();
    int[][][] result = new int[height][width][4];
    // SOLUTION GOES HERE....
}
Thanks in advance!
You need to get the packed pixel value as an int; you can then use the Color(int rgba, boolean hasalpha) constructor to build a Color object from which you can extract the RGBA values. For example...
private static int[][][] getPixels(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int[][][] result = new int[height][width][4];
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            // true = the packed int includes an alpha component
            Color c = new Color(image.getRGB(x, y), true);
            result[y][x][0] = c.getRed();
            result[y][x][1] = c.getGreen();
            result[y][x][2] = c.getBlue();
            result[y][x][3] = c.getAlpha();
        }
    }
    return result;
}
It's not the most efficient method, but it is one of the simplest.
BufferedImage has a method called getRGB(int x, int y) which returns an int whose four bytes are the components of the pixel (alpha, red, green and blue). If you don't want to do the bitwise operations yourself, you can create a new instance of java.awt.Color from the int returned by getRGB and use its getRed()/getGreen()/getBlue() methods.
You can do this in a loop to fill the three-dimensional array.
This is my code for this problem:
File f = new File(filePath); // image path with image name, like "lena.jpg"
BufferedImage img = ImageIO.read(f);
if (img == null) // if img is null, return
    return;
// 3D array [x][y][a,r,g,b]
int[][][] pixel3DArray = new int[img.getWidth()][img.getHeight()][4];
for (int x = 0; x < img.getWidth(); x++) {
    for (int y = 0; y < img.getHeight(); y++) {
        int px = img.getRGB(x, y); // get pixel at (x, y)
        // get alpha: shift and mask
        pixel3DArray[x][y][0] = (px >> 24) & 0xff;
        // get red
        pixel3DArray[x][y][1] = (px >> 16) & 0xff;
        // get green
        pixel3DArray[x][y][2] = (px >> 8) & 0xff;
        // get blue
        pixel3DArray[x][y][3] = px & 0xff;
    }
}
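The inverse direction, for completeness: the four components can be shifted back into one packed ARGB int. A small sketch:

```java
public class PackArgb {

    // Packs alpha, red, green, blue (each 0-255) back into one ARGB int,
    // the inverse of the shift-and-mask extraction above.
    static int pack(int a, int r, int g, int b) {
        return ((a & 0xff) << 24) | ((r & 0xff) << 16) | ((g & 0xff) << 8) | (b & 0xff);
    }

    public static void main(String[] args) {
        int px = pack(0xff, 0x12, 0x34, 0x56);
        System.out.println(Integer.toHexString(px)); // ff123456
    }
}
```

The resulting int can be handed straight to BufferedImage.setRGB(x, y, px).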
I am trying to create a Java function that makes a bulging effect on an image by shifting each pixel relative to the centre of the image. I first take the (x, y) coordinate of the pixel, find the relative shift, x = x - (x/2), and convert it to polar form [r*cos(a), r*sin(a)]. r is found by r = Math.sqrt(x*x + y*y), and the angle a by Math.atan2(y, x). The new radius r' is found using r' = 2*r^1.5. However, the new x, y values from [r*cos(a), r*sin(a)] exceed the dimensions of the image, and errors occur.
Am I making a fundamental mistake?
public void bulge()
{
    double xval, yval = 0;
    // loop through the columns
    for (int x = 0; x < this.getWidth(); x++)
    {
        // loop through the rows
        for (int y = 0; y < this.getHeight(); y++)
        {
            int redValue, greenValue, blueValue = 0;
            double newRadius = 0;
            Pixel pixel = this.getPixel(x, y);
            redValue = pixel.getRed();
            greenValue = pixel.getGreen();
            blueValue = pixel.getBlue();
            xval = x - (x / 2);
            yval = y - (y / 2);
            double radius = Math.sqrt(xval * xval + yval * yval);
            double angle = Math.atan2(yval, xval);
            newRadius = 2 * (Math.pow(radius, 1.5));
            xval = (int) (newRadius * Math.sin(angle));
            yval = (int) (newRadius * Math.cos(angle));
            Pixel pixelNewPos = this.getPixel((int) xval, (int) yval);
            pixelNewPos.setColor(new Color(redValue, greenValue, blueValue));
        }
    }
}
It's a lot easier to successfully apply a transform from source image A to destination image B by doing the reverse transform from pixels in image B to pixels in image A.
By this I mean for each pixel in destination image B, determine the pixel or pixels in source image A that contribute to the color. That way you don't end up with a whole bunch of pixels in the target image that haven't been touched.
As an example using a linear scaling operation by 2, a simple implementation might look like this:
for (int x = 0; x < sourceWidth; ++x) {
    for (int y = 0; y < sourceHeight; ++y) {
        Pixel sourcePixel = sourceImage.getPixel(x, y);
        int destPixelX = x * 2;
        int destPixelY = y * 2;
        destImage.setPixel(destPixelX, destPixelY, sourcePixel);
    }
}
It should be clear from this code that pixels with odd X or Y values will not be set in the destination image.
A better way would be something like this:
for (int x = 0; x < destWidth; ++x) {
    for (int y = 0; y < destHeight; ++y) {
        int sourcePixelX = x / 2;
        int sourcePixelY = y / 2;
        Pixel sourcePixel = sourceImage.getPixel(sourcePixelX, sourcePixelY);
        destImage.setPixel(x, y, sourcePixel);
    }
}
Although this is not a good image upscaling algorithm in general, it does show how to make sure that all the pixels in your target image are set.
Am I making a fundamental mistake?
At a conceptual level, yes. Your algorithm takes a rectangular image and moves the location of the pixels to give a larger, non-rectangular image. Obviously that won't fit into your original rectangle.
So you either need to clip (i.e. discard) the pixels that fall outside of the rectangle, or you need to use a larger rectangle so that all of the mapped pixels fall inside it.
In the latter case, there will be gaps around the edges... if your transformation is doing what you claim it does. A non-linear transformation of a rectangle is not going to have straight sides.
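To make the reverse-mapping idea concrete for the bulge itself: invert r' = 2*r^1.5 to r = (r'/2)^(2/3), and for every destination pixel sample the source at that radius. A sketch on a plain grayscale int[width][height] array (the Pixel class from the question is not standard Java, so arrays are used instead); destination pixels whose source falls outside the image are simply left black:

```java
public class InverseBulge {

    // Inverse-mapped bulge on a grayscale int[width][height] array:
    // for each destination pixel, invert r' = 2 * r^1.5 to find the
    // source radius, and copy that source pixel.
    static int[][] bulge(int[][] src) {
        int w = src.length, h = src[0].length;
        double cx = w / 2.0, cy = h / 2.0;
        int[][] dst = new int[w][h];
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                double dx = x - cx, dy = y - cy;
                double rDst = Math.sqrt(dx * dx + dy * dy);
                double angle = Math.atan2(dy, dx);
                // invert r' = 2 * r^1.5  =>  r = (r'/2)^(2/3)
                double rSrc = Math.pow(rDst / 2.0, 2.0 / 3.0);
                int sx = (int) Math.round(cx + rSrc * Math.cos(angle));
                int sy = (int) Math.round(cy + rSrc * Math.sin(angle));
                if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                    dst[x][y] = src[sx][sy];
                }
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        int[][] src = new int[8][8];
        src[4][4] = 200; // single bright pixel at the centre
        int[][] dst = bulge(src);
        System.out.println(dst[4][4]); // 200: the centre maps to itself
    }
}
```

Because the loop runs over destination coordinates, every destination pixel gets exactly one value and no out-of-bounds writes can occur.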
I have the following code to read a black-white picture in java.
BufferedImage image = ImageIO.read(new File(path));
BufferedImage bufferedImage = new BufferedImage(image.getWidth(null), image.getHeight(null), BufferedImage.TYPE_USHORT_GRAY);
Graphics g = bufferedImage.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
int w = bufferedImage.getWidth();
int h = bufferedImage.getHeight();
int[][] array = new int[w][h];
for (int j = 0; j < w; j++) {
    for (int k = 0; k < h; k++) {
        array[j][k] = bufferedImage.getRGB(j, k);
        System.out.print(array[j][k]);
    }
}
As you can see, I have set the type of the BufferedImage to TYPE_USHORT_GRAY, and I expect to see numbers between 0 and 255 in the 2D array, but instead I see -1 and other large integers. Can anyone highlight my mistake please?
As already mentioned in the comments and answers, the mistake is using the getRGB() method, which converts your pixel values to packed int format in the default sRGB color space (TYPE_INT_ARGB). In this format, -1 is the same as 0xffffffff, which means pure white.
To access your unsigned short pixel data directly, try:
int w = img.getWidth();
int h = img.getHeight();
DataBufferUShort buffer = (DataBufferUShort) img.getRaster().getDataBuffer(); // Safe cast as img is of type TYPE_USHORT_GRAY
// Conveniently, the buffer already contains the data array
short[] arrayUShort = buffer.getData();
// Access it like:
int grayPixel = arrayUShort[x + y * w] & 0xffff;
// ...or alternatively, if you like to re-arrange the data to a 2-dimensional array:
int[][] array = new int[w][h];
// Note: I switched the loop order to access pixels in more natural order
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        array[x][y] = buffer.getElem(x + y * w);
        System.out.print(array[x][y]);
    }
}
// Access it like:
grayPixel = array[x][y];
PS: It's probably still a good idea to look at the second link provided by @blackSmith, for proper color to gray conversion. ;-)
A BufferedImage of type TYPE_USHORT_GRAY, as its name says, stores pixels using 16 bits (the size of a short is 16 bits). The range 0..255 is only 8 bits, so the values may be well beyond 255.
And BufferedImage.getRGB() does not return these 16 bits of pixel data; quoting from its javadoc:
Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB) and default sRGB colorspace.
getRGB() will always return the pixel in RGB format regardless of the type of the BufferedImage.
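A tiny demo of that behaviour: the raster holds the raw 16-bit sample, while getRGB() hands back the converted packed sRGB value (pure white is stable under the conversion; intermediate gray values go through a color space conversion and may not map linearly):

```java
import java.awt.image.BufferedImage;

public class GrayVsRgb {
    public static void main(String[] args) {
        // A 1x1 TYPE_USHORT_GRAY image with its single pixel set to pure white.
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_USHORT_GRAY);
        img.getRaster().setSample(0, 0, 0, 0xffff);

        // getRGB() converts to packed sRGB: white is 0xffffffff, printed as -1.
        System.out.println(img.getRGB(0, 0)); // -1
        // The raster still holds the raw unsigned 16-bit value.
        System.out.println(img.getRaster().getSample(0, 0, 0)); // 65535
    }
}
```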
We are trying to get only a portion of the image out of the captured image, but in Java we can only get a rectangular subimage using image.getSubimage(x, y, width, height). Say I virtually split the image into 10 parts as shown below. How can I extract only parts 1, 2, 4, 6, 8, 9 and 10, as shown in the second image, using plain Java without consuming too many resources or too much time?
Update
Below is the sample code
for (int x = 0; x < columns; x++) {
    for (int y = 0; y < rows; y++) {
        imagePart = img.getSubimage(x * this.smallWidth,
                y * this.smallHeight, this.smallWidth, this.smallHeight);
        if (!ifSelectedPart(imagePart)) {
            smallImages[x][y] = imagePart;
        } else {
            smallImages[x][y] = fillwithAlpha();
        }
    }
}
createImage(smallImages);
If these rectangles are all the same size you can treat the image as a grid and calculate what region of the image you need with a little math.
int numberColumns = 2;
int numberRows = 5;

public Rectangle getSubregion(int row, int column, int imgWidth, int imgHeight) {
    int cellWidth = imgWidth / numberColumns;
    int cellHeight = imgHeight / numberRows;
    return new Rectangle(column * cellWidth, row * cellHeight, cellWidth, cellHeight);
}

// usage
Rectangle cellOne = getSubregion(0, 0, img.getWidth(), img.getHeight());
Then just render each of those subregions to a new image in memory.
Images are by their nature rectangular. Perhaps you wish to draw over the image with a 0-alpha composite color to cover up the region that you don't want to see. Either that, or create a grid of rectangular sub-images and keep only the ones from the grid that you want to display.
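The 0-alpha approach can be sketched with AlphaComposite.Clear on an ARGB copy; the grid-cell parameters here are hypothetical and would come from whatever selection logic you use:

```java
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class MaskRegion {

    // Copies an image into an ARGB buffer, then punches out an unwanted
    // grid cell by clearing it to fully transparent pixels.
    static BufferedImage clearCell(BufferedImage src, int cellX, int cellY,
                                   int cellW, int cellH) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(src, 0, 0, null);
        g.setComposite(AlphaComposite.Clear); // subsequent fills erase to alpha 0
        g.fillRect(cellX * cellW, cellY * cellH, cellW, cellH);
        g.dispose();
        return out;
    }

    public static void main(String[] args) {
        // A solid red 4x4 test image.
        BufferedImage src = new BufferedImage(4, 4, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = src.createGraphics();
        g.setColor(java.awt.Color.RED);
        g.fillRect(0, 0, 4, 4);
        g.dispose();

        BufferedImage out = clearCell(src, 0, 0, 2, 2); // erase top-left 2x2 cell
        System.out.println(out.getRGB(0, 0) >>> 24); // 0   (cleared)
        System.out.println(out.getRGB(3, 3) >>> 24); // 255 (kept)
    }
}
```

Repeating the fillRect for each unwanted cell gives the non-rectangular result from the question in a single image.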
I have a 2D integer array which I get from the BufferedImage method getRGB(). When I try to convert the 2D integer array back to a BufferedImage, I get only a black picture.
This method
BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
for (int i = 0; i < matrix.length; i++) {
    for (int j = 0; j < matrix[0].length; j++) {
        int pixel = matrix[i][j];
        System.out.println("The pixel in Matrix: " + pixel);
        bufferedImage.setRGB(i, j, pixel);
        System.out.println("The pixel in BufferedImage: " + bufferedImage.getRGB(i, j));
    }
}
gives this output:
The pixel in Matrix: 0
The pixel in BufferedImage: -16777216
The pixel in Matrix: 721420288
The pixel in BufferedImage: -16777216
The pixel in Matrix: 738197504
The pixel in BufferedImage: -16777216
The pixel in Matrix: 520093696
The pixel in BufferedImage: -16777216
The pixel in Matrix: 503316480
The pixel in BufferedImage: -16777216
Why is every pixel -16777216?
Thanks!
UPDATE
the method which returns the integer Matrix
public int[][] getMatrixOfImage(BufferedImage bufferedImage) {
    int width = bufferedImage.getWidth(null);
    int height = bufferedImage.getHeight(null);
    int[][] pixels = new int[width][height];
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            pixels[i][j] = bufferedImage.getRGB(i, j);
        }
    }
    return pixels;
}
All your pixels seem to be black with different alpha values: 721420288, for instance, is 0x2B000000, i.e. black with alpha 0x2B, and the -16777216 you get back is 0xff000000, opaque black. You have to use TYPE_INT_ARGB to not lose the alpha channel.
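A minimal round-trip sketch with TYPE_INT_ARGB, using one of the pixel values from the output above (721420288 is 0x2B000000):

```java
import java.awt.image.BufferedImage;

public class MatrixRoundTrip {

    // Rebuilds a BufferedImage from a [width][height] matrix of packed
    // ARGB ints. TYPE_INT_ARGB preserves the alpha byte that TYPE_INT_RGB
    // would force to opaque.
    static BufferedImage fromMatrix(int[][] matrix) {
        int width = matrix.length, height = matrix[0].length;
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        for (int i = 0; i < width; i++) {
            for (int j = 0; j < height; j++) {
                img.setRGB(i, j, matrix[i][j]);
            }
        }
        return img;
    }

    public static void main(String[] args) {
        int[][] matrix = { { 0x2B000000, 0x80123456 } }; // 1x2, non-opaque pixels
        BufferedImage img = fromMatrix(matrix);
        System.out.println(img.getRGB(0, 0) == 0x2B000000); // true: alpha survives
    }
}
```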
If you are using TYPE_INT_RGB, you can do it this way:
bufferedImage.getRaster().setPixels(x, y, width, height, intArray); // note: intArray holds one sample per band, not packed RGB ints