Negative values in the WritableRaster - Java

I'm trying to implement an averaging filter with different sizes: 3x3, 5x5, 7x7 and 11x11. The calculations look correct while debugging, but the problem is that the value is saved in the WritableRaster as a negative number, so I'm getting weird results. The second strange thing is that when I read back the same pixel that was saved as negative, it is retrieved as a positive value.
I'm using int.
What's wrong? Any help?
Here is my code for the 5x5 averaging filter.
public static BufferedImage filter5x5_2D(BufferedImage paddedBI, BufferedImage bi, double[][] filter)
{
    WritableRaster myImage = paddedBI.copyData(null);
    BufferedImage img = new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
    WritableRaster myImage2 = img.copyData(null);
    for (int i = 2; i < myImage.getHeight() - 2; i++)
    {
        for (int j = 2; j < myImage.getWidth() - 2; j++)
        {
            int value = 0;
            int copyi = i - 2;
            for (int m = 0; m < 5; m++)
            {
                int copyj = j - 2;
                for (int n = 0; n < 5; n++)
                {
                    int result = myImage.getSample(copyj, copyi, 0);
                    double f = filter[m][n];
                    double add = result * filter[m][n];
                    value += (int) (filter[m][n] * myImage.getSample(copyj, copyi, 0));
                    copyj++;
                }
                copyi++;
                //myImage2.setSample(j-1 , i-1, 0, value);
            }
            myImage2.setSample(j - 2, i - 2, 0, value);
            //int checkResult = myImage2.getSample(j-1,i-1,0);
        }
    }
    BufferedImage res = new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
    res.setData(myImage2);
    return res;
}

I do not find any negative values. Here is the main method I used to test your code:
public static void main(String[] args) throws IOException {
    BufferedImage bi = ImageIO.read(new File("C:/Tmp/test.bmp"));
    BufferedImage newImage = new BufferedImage(bi.getWidth() + 4, bi.getHeight() + 4, bi.getType());
    Graphics g = newImage.getGraphics();
    g.setColor(Color.white);
    g.fillRect(0, 0, bi.getWidth() + 4, bi.getHeight() + 4);
    g.drawImage(bi, 2, 2, null);
    g.dispose();
    double[][] filter = new double[5][5];
    for (int i = 0; i < 5; ++i) {
        for (int j = 0; j < 5; ++j) {
            filter[i][j] = 1.0 / (5 * 5);
        }
    }
    BufferedImage filtered = filter5x5_2D(newImage, bi, filter);
    ImageIO.write(filtered, "bmp", new File("C:/tmp/filtered.bmp"));
}
You should note that your variables result, f and add are unused. Also, it would be better if value were of type double instead of int. In the worst case you add 25 samples of value 11: each term 11 * (1/25) = 0.44 is truncated to zero by the int cast, so your code produces a gray value of 0 where it should produce 11.
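A minimal sketch of how the inner accumulation could look, reusing the variables from your method: accumulate in a double, cast only once at the end, and clamp to the 8-bit range before writing the sample.
double value = 0.0;                 // accumulate in double, not int
int copyi = i - 2;
for (int m = 0; m < 5; m++)
{
    int copyj = j - 2;
    for (int n = 0; n < 5; n++)
    {
        // keep the fractional part; do not cast each term to int
        value += filter[m][n] * myImage.getSample(copyj, copyi, 0);
        copyj++;
    }
    copyi++;
}
// round once at the end and clamp to the valid 0..255 range
int gray = (int) Math.round(Math.min(255.0, Math.max(0.0, value)));
myImage2.setSample(j - 2, i - 2, 0, gray);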

Related

Convolution - Calculating a Neighbour Element Index for a Vectorised Image

Assume the following matrix acts as both an image and a kernel in a matrix convolution operation:
0 1 2
3 4 5
6 7 8
To calculate the neighbour pixel index you would use the following formula:
neighbourColumn = imageColumn + (maskColumn - centerMaskColumn);
neighbourRow = imageRow + (maskRow - centerMaskRow);
Thus the output of convolution would be:
output0 = {0,1,3,4} x {4,5,7,8} = 58
output1 = {0,1,2,3,4,5} x {3,4,5,6,7,8} = 100
output2 = {1,2,4,5} x {3,4,6,7} = 70
output3 = {0,1,3,4,6,7} x {1,2,4,5,7,8} = 132
output4 = {0,1,2,3,4,5,6,7,8} x {0,1,2,3,4,5,6,7,8} = 204
output5 = {1,2,4,5,7,8} x {0,1,3,4,6,7} = 132
output6 = {3,4,6,7} x {1,2,4,5} = 70
output7 = {3,4,5,6,7,8} x {0,1,2,3,4,5} = 100
output8 = {4,5,7,8} x {0,1,3,4} = 58
Thus the output matrix would be:
58 100 70
132 204 132
70 100 58
Now assume the matrix is flattened to give the following vector:
0 1 2 3 4 5 6 7 8
This vector now acts as an image and a kernel in a vector convolution operation, for which the output should be:
58 100 70 132 204 132 70 100 58
Given the code below, how do you calculate the neighbour element index for the vector such that it corresponds to the same neighbour element in the matrix?
public int[] convolve(int[] image, int[] kernel)
{
    int imageValue;
    int kernelValue;
    int outputValue;
    int neighbour;
    int[] outputImage = new int[image.length];
    // loop through image
    for (int i = 0; i < image.length; i++)
    {
        outputValue = 0;
        // loop through kernel
        for (int j = 0; j < kernel.length; j++)
        {
            neighbour = ?;
            // discard out of bound neighbours
            if (neighbour >= 0 && neighbour < image.length)
            {
                imageValue = image[neighbour];
                kernelValue = kernel[j];
                outputValue += imageValue * kernelValue;
            }
        }
        outputImage[i] = outputValue;
    }
    return outputImage;
}
The neighbour index is computed by offsetting the original pixel index by the difference between the index of the current element and half the size of the matrix. For example, to compute the column index:
int neighbourCol = imageCol + col - (size / 2);
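Applied to the flattened arrays in the question, a minimal sketch could look like the following. Note that the extra imageWidth and kernelWidth parameters are my assumption; a flat array alone does not carry the width information, so it has to be passed in somehow.
public int[] convolve(int[] image, int imageWidth, int[] kernel, int kernelWidth)
{
    int imageHeight = image.length / imageWidth;
    int[] outputImage = new int[image.length];
    for (int i = 0; i < image.length; i++)
    {
        int col = i % imageWidth;   // 2D coordinates of the current pixel
        int row = i / imageWidth;
        int outputValue = 0;
        for (int j = 0; j < kernel.length; j++)
        {
            int neighbourCol = col + (j % kernelWidth) - kernelWidth / 2;
            int neighbourRow = row + (j / kernelWidth) - kernelWidth / 2;
            // discard out of bound neighbours
            if (neighbourCol >= 0 && neighbourCol < imageWidth
                && neighbourRow >= 0 && neighbourRow < imageHeight)
            {
                int neighbour = neighbourRow * imageWidth + neighbourCol;
                outputValue += image[neighbour] * kernel[j];
            }
        }
        outputImage[i] = outputValue;
    }
    return outputImage;
}
For the 3x3 example above, convolve(image, 3, kernel, 3) reproduces the expected output 58 100 70 132 204 132 70 100 58.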
I put a working demo on GitHub, trying to keep the whole convolution algorithm as readable as possible:
int[] dstImage = new int[srcImage.width() * srcImage.height()];
srcImage.forEachElement((image, imageCol, imageRow) -> {
    Pixel pixel = new Pixel();
    forEachElement((filter, col, row) -> {
        int neighbourCol = imageCol + col - (size / 2);
        int neighbourRow = imageRow + row - (size / 2);
        if (srcImage.hasElementAt(neighbourCol, neighbourRow)) {
            int color = srcImage.at(neighbourCol, neighbourRow);
            int weight = filter.at(col, row);
            pixel.addWeightedColor(color, weight);
        }
    });
    dstImage[imageRow * srcImage.width() + imageCol] = pixel.rgb();
});
As you are dealing with 2D images, you will have to retain some information about the images in addition to the plain 1D pixel array. In particular, you at least need the width of the image (and of the mask) in order to find out which indices in the 1D array correspond to which indices in the original 2D image. As already pointed out by Raffaele in his answer, there are general rules for the conversion between these ("virtual") 2D coordinates and 1D coordinates in such a pixel array:
int pixelX = ...;
int pixelY = ...;
int index = pixelX + pixelY * imageSizeX;
Based on this, you can do your convolution just as you would on a 2D image. The limits for the pixels that you may access are easy to check. The loops are simple 2D loops over the image and the mask. It all boils down to the point where you access the 1D data with the 2D coordinates, as described above.
Here is an example. It applies a Sobel filter to the input image. (There may still be something odd with the pixel values, but the convolution itself and the index computations should be right.)
import java.awt.Graphics2D;
import java.awt.GridLayout;
import java.awt.image.BufferedImage;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class ConvolutionWithArrays1D
{
    public static void main(String[] args) throws IOException
    {
        final BufferedImage image =
            asGrayscaleImage(ImageIO.read(new File("lena512color.png")));
        SwingUtilities.invokeLater(new Runnable()
        {
            @Override
            public void run()
            {
                createAndShowGUI(image);
            }
        });
    }

    private static void createAndShowGUI(BufferedImage image0)
    {
        JFrame f = new JFrame();
        f.getContentPane().setLayout(new GridLayout(1, 2));
        f.getContentPane().add(new JLabel(new ImageIcon(image0)));
        BufferedImage image1 = compute(image0);
        f.getContentPane().add(new JLabel(new ImageIcon(image1)));
        f.pack();
        f.setLocationRelativeTo(null);
        f.setVisible(true);
    }

    private static BufferedImage asGrayscaleImage(BufferedImage image)
    {
        BufferedImage gray = new BufferedImage(
            image.getWidth(), image.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        Graphics2D g = gray.createGraphics();
        g.drawImage(image, 0, 0, null);
        g.dispose();
        return gray;
    }

    private static int[] obtainGrayscaleIntArray(BufferedImage image)
    {
        BufferedImage gray = new BufferedImage(
            image.getWidth(), image.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        Graphics2D g = gray.createGraphics();
        g.drawImage(image, 0, 0, null);
        g.dispose();
        DataBuffer dataBuffer = gray.getRaster().getDataBuffer();
        DataBufferByte dataBufferByte = (DataBufferByte) dataBuffer;
        byte data[] = dataBufferByte.getData();
        int result[] = new int[data.length];
        for (int i = 0; i < data.length; i++)
        {
            // mask with 0xff so that gray values above 127 do not become negative
            result[i] = data[i] & 0xff;
        }
        return result;
    }

    private static BufferedImage createImageFromGrayscaleIntArray(
        int array[], int imageSizeX, int imageSizeY)
    {
        BufferedImage gray = new BufferedImage(
            imageSizeX, imageSizeY, BufferedImage.TYPE_BYTE_GRAY);
        DataBuffer dataBuffer = gray.getRaster().getDataBuffer();
        DataBufferByte dataBufferByte = (DataBufferByte) dataBuffer;
        byte data[] = dataBufferByte.getData();
        for (int i = 0; i < data.length; i++)
        {
            data[i] = (byte) array[i];
        }
        return gray;
    }

    private static BufferedImage compute(BufferedImage image)
    {
        int imagePixels[] = obtainGrayscaleIntArray(image);
        int mask[] =
        {
            1, 0, -1,
            2, 0, -2,
            1, 0, -1,
        };
        int outputPixels[] =
            Convolution.filter(imagePixels, image.getWidth(), mask, 3);
        return createImageFromGrayscaleIntArray(
            outputPixels, image.getWidth(), image.getHeight());
    }
}
class Convolution
{
    public static final int[] filter(
        final int[] image, int imageSizeX,
        final int[] mask, int maskSizeX)
    {
        int imageSizeY = image.length / imageSizeX;
        int maskSizeY = mask.length / maskSizeX;
        int output[] = new int[image.length];
        for (int y = 0; y < imageSizeY; y++)
        {
            for (int x = 0; x < imageSizeX; x++)
            {
                int outputPixelValue = 0;
                for (int my = 0; my < maskSizeY; my++)
                {
                    for (int mx = 0; mx < maskSizeX; mx++)
                    {
                        int neighborX = x + mx - maskSizeX / 2;
                        int neighborY = y + my - maskSizeY / 2;
                        if (neighborX >= 0 && neighborX < imageSizeX &&
                            neighborY >= 0 && neighborY < imageSizeY)
                        {
                            int imageIndex =
                                neighborX + neighborY * imageSizeX;
                            int maskIndex = mx + my * maskSizeX;
                            int imagePixelValue = image[imageIndex];
                            int maskPixelValue = mask[maskIndex];
                            outputPixelValue +=
                                imagePixelValue * maskPixelValue;
                        }
                    }
                }
                outputPixelValue = truncate(outputPixelValue);
                int outputIndex = x + y * imageSizeX;
                output[outputIndex] = outputPixelValue;
            }
        }
        return output;
    }

    private static final int truncate(final int pixelValue)
    {
        return Math.min(255, Math.max(0, pixelValue));
    }
}

Error in writing a grayscale image

In the following code I am trying to read a grayscale image, store the pixel values in a 2D array, and write the image back out under a different name.
The code is
package dct;

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.Raster;
import java.io.File;
import javax.imageio.ImageIO;

public class writeGrayScale
{
    public static void main(String[] args)
    {
        File file = new File("lightning.jpg");
        BufferedImage img = null;
        try
        {
            img = ImageIO.read(file);
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
        int width = img.getWidth();
        int height = img.getHeight();
        int[][] arr = new int[width][height];
        Raster raster = img.getData();
        for (int i = 0; i < width; i++)
        {
            for (int j = 0; j < height; j++)
            {
                arr[i][j] = raster.getSample(i, j, 0);
            }
        }
        BufferedImage image = new BufferedImage(256, 256, BufferedImage.TYPE_BYTE_GRAY);
        byte[] raster1 = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        System.arraycopy(arr, 0, raster1, 0, raster1.length);
        //
        BufferedImage image1 = image;
        try
        {
            File output = new File("grayscale.jpg");
            ImageIO.write(image1, "jpg", output);
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }// main
}// class
For this code, the error is
Exception in thread "main" java.lang.ArrayStoreException
at java.lang.System.arraycopy(Native Method)
at dct.writeGrayScale.main(writeGrayScale.java:49)
Java Result: 1
How to remove this error to write the grayscale image?
I found this: "ArrayStoreException -- if an element in the src array could not be stored into the dest array because of a type mismatch." http://www.tutorialspoint.com/java/lang/system_arraycopy.htm
Two thoughts:
You're copying an int array into a byte array.
That's not what throws the exception, but are the dimensions right? arr is a two-dimensional array, while raster1 is a one-dimensional array.
And you can't simply replace that byte array with a two-dimensional one and ignore what the method you're calling returns.
Change int[][] arr to byte[] arr like this.
byte[] arr = new byte[width * height * 4];
for (int i = 0, z = 0; i < width; i++) {
    for (int j = 0; j < height; j++, z += 4) {
        // read the gray sample from the source raster
        int v = raster.getSample(i, j, 0);
        for (int k = 3; k >= 0; --k) {
            arr[z + k] = (byte) (v & 0xff);
            v >>= 8;
        }
    }
}
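An alternative sketch, under the assumption that the source really is 8-bit grayscale: TYPE_BYTE_GRAY stores one byte per pixel, so a flat byte[width * height] matches the destination raster exactly, and the hard-coded 256x256 size can be replaced by the source's own dimensions.
// read the samples row by row into a flat, one-byte-per-pixel array
byte[] arr = new byte[width * height];
int z = 0;
for (int j = 0; j < height; j++) {
    for (int i = 0; i < width; i++) {
        arr[z++] = (byte) raster.getSample(i, j, 0);
    }
}
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
byte[] raster1 = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
System.arraycopy(arr, 0, raster1, 0, raster1.length);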

Assign RGB values in Java with nested for loops

I have a question; here is my code:
int W = img.getWidth();
int H = img.getHeight();
int[][] pixels = new int[W][H];
int[][][] rgb = new int[3][H][W];
for (int i = 0; i < W; i++)
    for (int j = 0; j < H; j++) {
        pixels[i][j] = img.getRGB(i, j);
        Color clr = new Color(pixels[i][j]);
        rgb[0][j][i] = clr.getRed();
        rgb[1][j][i] = clr.getGreen();
        rgb[2][j][i] = clr.getBlue();
    }
/*
pixels changing process
*/
// 1st for
for (int[] asd : rgb[0])
    System.out.println(Arrays.toString(asd));
// 2nd for
for (int i = 0; i < W; i++)
    for (int j = 0; j < H; j++) {
        /*Color myColor = new Color(rgb[0][i][j], rgb[1][i][j], rgb[2][i][j]);
        int newrgb = myColor.getRGB();
        img.setRGB(W, H, newrgb);*/
    }
}
Printing the red values with the 1st for loop works fine, but why can't I set those values with the 2nd for loop?
When I run the code, it reports ByteInterleavedRaster.setDataElements(int, int, Object) line: not available.
I want to assign the image's colors from the new values of rgb[0] (red), rgb[1] (green) and rgb[2] (blue) that are printed by the 1st for loop. When I expected it to work with the 2nd for loop, it threw an error.
Thanks in advance :)
I think this looks correct
int W = img.getWidth();
int H = img.getHeight();
int[][] pixels = new int[W][H];
int[][][] rgb = new int[3][W][H];
for (int i = 0; i < W; i++)
    for (int j = 0; j < H; j++) {
        pixels[i][j] = img.getRGB(i, j);
        Color clr = new Color(pixels[i][j]);
        rgb[0][i][j] = clr.getRed();
        rgb[1][i][j] = clr.getGreen();
        rgb[2][i][j] = clr.getBlue();
    }
/*
pixels changing process
*/
for (int i = 0; i < W; i++)
    for (int j = 0; j < H; j++) {
        Color myColor = new Color(rgb[0][i][j],
                                  rgb[1][i][j],
                                  rgb[2][i][j]);
        int newrgb = myColor.getRGB();
        img.setRGB(i, j, newrgb);
    }
}
You have to change this line img.setRGB(W,H,newrgb) to this one img.setRGB(j,i,newrgb).

How to convert a Bitmap object to an Image object and vice versa in Android (Java)

How do I convert an Image object to a Bitmap object and vice versa?
I have a method that takes an Image object as input and returns an Image object, but I want to pass a Bitmap object as input and get a Bitmap object as output. My code is this:
public Image edgeFilter(Image imageIn) {
    // Image size
    int width = imageIn.getWidth();
    int height = imageIn.getHeight();
    boolean[][] mask = null;
    Paint grayMatrix[] = new Paint[256];
    // Init gray matrix
    for (int i = 0; i <= 255; i++) {
        Paint p = new Paint();
        p.setColor(Color.rgb(i, i, i));
        grayMatrix[i] = p;
    }
    int[][] luminance = new int[width][height];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (mask != null && !mask[x][y]) {
                continue;
            }
            luminance[x][y] = (int) luminance(imageIn.getRComponent(x, y), imageIn.getGComponent(x, y), imageIn.getBComponent(x, y));
        }
    }
    int grayX, grayY;
    int magnitude;
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            if (mask != null && !mask[x][y]) {
                continue;
            }
            grayX = -luminance[x-1][y-1] + luminance[x-1][y-1+2] - 2 * luminance[x-1+1][y-1] + 2 * luminance[x-1+1][y-1+2] - luminance[x-1+2][y-1] + luminance[x-1+2][y-1+2];
            grayY = luminance[x-1][y-1] + 2 * luminance[x-1][y-1+1] + luminance[x-1][y-1+2] - luminance[x-1+2][y-1] - 2 * luminance[x-1+2][y-1+1] - luminance[x-1+2][y-1+2];
            // Magnitudes sum
            magnitude = 255 - Image.SAFECOLOR(Math.abs(grayX) + Math.abs(grayY));
            Paint grayscaleColor = grayMatrix[magnitude];
            // Apply the color into a new image
            imageIn.setPixelColor(x, y, grayscaleColor.getColor());
        }
    }
    return imageIn;
}
If you want to convert an Image object to a Bitmap and the format has been selected as JPEG, then you can accomplish this by using the following code (if it is not a JPEG, then additional conversions will be needed):
...
if (image.getFormat() == ImageFormat.JPEG)
{
    ByteBuffer buffer = capturedImage.getPlanes()[0].getBuffer();
    byte[] jpegByteData = new byte[buffer.remaining()];
    buffer.get(jpegByteData); // copy the JPEG bytes out of the plane's buffer
    Bitmap bitmapImage = BitmapFactory.decodeByteArray(jpegByteData, 0, jpegByteData.length, null);
}
...
This link gives more info on saving images in PNG format.
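For the opposite direction, a minimal sketch (assuming bitmap is an android.graphics.Bitmap and JPEG is acceptable as the interchange format) could compress the bitmap into a byte array and decode it back:
// compress the Bitmap into an in-memory JPEG
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 90, stream);
byte[] jpegByteData = stream.toByteArray();
// ...and decode the bytes back into a Bitmap if required
Bitmap roundTripped = BitmapFactory.decodeByteArray(jpegByteData, 0, jpegByteData.length);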
It is difficult to see what you are attempting to do. Are you trying to alter this code so it also works with bitmap formats?
Here is an answer from someone doing similar work with bitmap images; it should give you an idea of what other people do.

convert color image to grayscale

I want to convert a color image to grayscale. First I get the data of the color image and change this data, but when I try to create a gray image from this data I get an error like this...
getData() method:
public double[] getData(BufferedImage a) {
    double[] data2 = new double[h * w];
    int red = 0, green = 0, blue = 0;
    int counter = 0;
    for (int w1 = 0; w1 < w; w1++) {
        for (int h1 = 0; h1 < h; h1++) {
            int rgb = a.getRGB(w1, h1);
            red = (rgb) >> 16;
            green = (rgb) >> 8;
            blue = (rgb);
            data2[counter] = (red + green + blue) / 3;
            counter++;
        }
    }
    return data2;
}
createImage() method:
void createImage() {
    double[] data1 = getData(colorImage);
    BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
    WritableRaster raster = image.getRaster();
    int counter = 0;
    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w; j++) {
            raster.setSample(i, j, 0, data1[counter]);
            counter++;
        }
    }
}
As it looks to me, the i and j in the setSample call must be swapped (or w and h in the BufferedImage).
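A minimal sketch of the loops in createImage with the coordinates swapped; it also walks the image in the same order as getData() (x in the outer loop), so the counter lines up with the order in which data1 was filled:
int counter = 0;
for (int x = 0; x < w; x++) {          // same traversal order as getData()
    for (int y = 0; y < h; y++) {
        // setSample expects (x, y, band, value)
        raster.setSample(x, y, 0, data1[counter]);
        counter++;
    }
}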
