I have read my original image as a BufferedImage in Java and, after some operations, I am trying to threshold the image to either high (255) or low (0). But when I save the image (I actually overwrite it with the new values), the pixel values are not only 0 and 255: some neighbouring values appear, and I don't understand why.
READING MY IMAGE
File input = new File("/../Screenshots/1.jpg");
BufferedImage image = ImageIO.read(input);
for (int i = 0; i < image.getWidth(); i++) {
    for (int j = 0; j < image.getHeight(); j++) {
        Color c = new Color(image.getRGB(i, j));
        powerspectrum[i][j] = (int) ((c.getRed() * 0.299)
                + (c.getGreen() * 0.587) + (c.getBlue() * 0.114));
    }
}
THRESHOLDING MY IMAGE
for (int i = 0; i < image.getWidth(); i++) {
    for (int j = 0; j < image.getHeight(); j++) {
        if (gradient[i][j] <= upperthreshold
                && gradient[i][j] >= lowerthreshold)
            spaces[i][j] = 255;
        else
            spaces[i][j] = 0;
        Color gradColor = new Color(spaces[i][j], spaces[i][j],
                spaces[i][j]);
        image.setRGB(i, j, gradColor.getRGB());
    }
}
SAVING MY IMAGE
File gradoutput = new File("/../Screenshots/3_GradThresh.jpg");
ImageIO.write(image, "jpg", gradoutput);
I don't know how to cut off the other intensity values.
I suspect this is because JPG is a lossy format: when you save a JPG to disk, lossy compression is applied. Try working with a bitmap (BMP) instead to see if that removes these neighbouring gray values.
+1 on the JPG compression issue. In image processing we use PNG (the best lossless compression format) or, in the worst case, TIFF.
Btw, the setRGB/getRGB methods have terrible performance. The fastest approach is to modify the DataBuffer directly, but then you have to handle each type of image encoding yourself. An alternative (slower) solution is to use the Raster; then you don't have to worry about the encoding.
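For example, a minimal sketch of that Raster approach applied to the thresholding above, saving as PNG so the 0/255 values survive unchanged (it assumes gradient, lowerthreshold and upperthreshold exist as in the question):
WritableRaster raster = image.getRaster();
for (int i = 0; i < image.getWidth(); i++) {
    for (int j = 0; j < image.getHeight(); j++) {
        // keep pixels inside the threshold band, discard the rest
        int value = (gradient[i][j] >= lowerthreshold
                && gradient[i][j] <= upperthreshold) ? 255 : 0;
        for (int band = 0; band < raster.getNumBands(); band++) {
            raster.setSample(i, j, band, value);
        }
    }
}
// PNG is lossless, so only 0 and 255 end up on disk
ImageIO.write(image, "png", new File("/../Screenshots/3_GradThresh.png"));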
Related
I am trying to take a certain section of a certain frame of a gif (we'll call this certain frame myFile1.gif), and replace that same area on the subsequent frames (myFile2.gif and so on) with the specified area from myFile1.gif.
Now, this code actually does modify the correct area on myFile2.gif and the subsequent frames, but for some reason, the specified area on myFile2.gif does not come out as a duplicate of that same area on myFile1.gif. Rather, the area comes out all "wonky". (It looks like it has inverted colors or something and it just looks very pixelated. You can also clearly see that modification was done on that specific area.) Why is this happening?
If it is relevant, the gif file is of the gif89a format, and myFile1.gif and myFile2.gif are both individual frames of the gif (not the whole animation in motion). Furthermore, myFile1.gif and myFile2.gif have the same dimensions (they are both 347 x 875).
File f = new File("myFile1.gif");
BufferedImage img = ImageIO.read(f);
byte[] pixels1 = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
BufferedImage area = img.getSubimage(104, 361, 158, 85);
String filename = "myFile2.gif";
File g = new File(filename);
BufferedImage temp = ImageIO.read(g);
byte[] pixels2 = ((DataBufferByte) temp.getRaster().getDataBuffer()).getData();
for (int j = 0; j < area.getHeight(); j++) {
    for (int k = 0; k < area.getWidth(); k++) {
        int index = (temp.getWidth() * (j + 361)) + (k + 104);
        pixels2[index] = pixels1[index];
    }
}
File outputfile = new File(filename);
ImageIO.write(temp, "gif", outputfile);
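One thing worth trying (a sketch only, not tested against these files): each GIF frame can have its own color palette, so copying raw palette indices from one frame's DataBuffer into another's can map to completely different colors. Drawing the subimage onto the second frame instead lets Java convert the pixels through the destination frame's own color model:
BufferedImage source = ImageIO.read(new File("myFile1.gif"));
BufferedImage area = source.getSubimage(104, 361, 158, 85);
BufferedImage target = ImageIO.read(new File("myFile2.gif"));
// drawImage maps the copied pixels into the target frame's palette
Graphics2D g2 = target.createGraphics();
try {
    g2.drawImage(area, 104, 361, null);
} finally {
    g2.dispose();
}
ImageIO.write(target, "gif", new File("myFile2.gif"));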
When we read an RGB image, we do shifting operations to get the R, G and B matrices separately. Is it possible to read a grayscale image (JPEG), directly manipulate its pixel values, and then rewrite the image?
Ultimately I have to perform a DCT on the grayscale image.
The code below reads a grayscale image into a simple two-dimensional array:
File file = new File("path/to/file");
BufferedImage img = ImageIO.read(file);
int width = img.getWidth();
int height = img.getHeight();
int[][] imgArr = new int[width][height];
Raster raster = img.getData();
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        imgArr[i][j] = raster.getSample(i, j, 0);
    }
}
Note: The raster.getSample(...) method takes 3 arguments: x, the X coordinate of the pixel; y, the Y coordinate of the pixel; and b, the band to return. For grayscale images there is only one band, so we read band 0.
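To answer the second half of the question, here is a minimal sketch for writing the (possibly modified) array back out, assuming the values are still in the 0..255 range and using a hypothetical output path:
BufferedImage out = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
WritableRaster outRaster = out.getRaster();
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        // write each gray value into band 0 of the new image
        outRaster.setSample(i, j, 0, imgArr[i][j]);
    }
}
ImageIO.write(out, "png", new File("path/to/output.png"));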
I have a .bin file that was created by MATLAB code as uint16, and I need to read it in Java.
With the code below I get a blurry image with a very bad grayscale, and the length of the file seems to be twice the number of pixels. There seems to be some loss of information when reading the file this way. Is there a way to read .bin files other than through input streams?
This is how I try to read the .bin file:
InputStream is = new FileInputStream(filename);
DataInputStream dis = new DataInputStream(is);
int[] buf = new int[length];
int[][] real = new int[x][y];
int i = 0;
while (dis.available() > 0) {
    buf[i++] = dis.readShort();
}
int counter = 0;
for (int j = 0; j < x; j++) {
    for (int k = 0; k < y; k++) {
        real[j][k] = buf[counter];
        counter++;
    }
}
return real;
And this is the part of the main class where the first class is called:
BinaryFile2 binary = new BinaryFile2();
int[][] image = binary.read("data001.bin", 1024, 2048);
BufferedImage theImage = new BufferedImage(1024, 2048,
BufferedImage.TYPE_BYTE_GRAY);
for (int y = 0; y < 2048; y++) {
    for (int x = 0; x < 1024; x++) {
        int value = image[x][y];
        theImage.setRGB(x, y, value);
    }
}
File outputfile = new File("saved.png");
ImageIO.write(theImage, "png", outputfile);
You are storing uint16 data in an int array via readShort(), which returns a signed value; anything at or above 32768 gets sign-extended to a negative int, corrupting the data.
The following post discusses a similar issue:
Java read unsigned int, store, and write it back
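For example, along the lines of that post (just a sketch of the masking idea; the answer below shows a cleaner route):
// readShort() sign-extends into the int, so mask off the upper 16 bits
// to recover the original unsigned value (0..65535)
buf[i++] = dis.readShort() & 0xffff;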
To correctly read and display an image originally stored as uint16, it's best to use the BufferedImage.TYPE_USHORT_GRAY type. A Java short is 16 bits, and DataBufferUShort is made for storing unsigned 16-bit values.
Try this:
InputStream is = ...;
DataInputStream data = new DataInputStream(is);
BufferedImage theImage = new BufferedImage(1024, 2048, BufferedImage.TYPE_USHORT_GRAY);
short[] pixels = ((DataBufferUShort) theImage.getRaster().getDataBuffer()).getData();
for (int i = 0; i < pixels.length; i++) {
    pixels[i] = data.readShort(); // short value is signed, but DataBufferUShort will take care of the "unsigning"
}
// TODO: close() streams in a finally block
To convert the image further to an 8 bit image, you can create a new image and draw the original onto that:
BufferedImage otherImage = new BufferedImage(1024, 2048, BufferedImage.TYPE_BYTE_GRAY);
Graphics2D g = otherImage.createGraphics();
try {
    g.drawImage(theImage, 0, 0, null);
} finally {
    g.dispose();
}
Now you can store otherImage as an eight bit grayscale PNG.
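For instance (reusing the file name from the question):
ImageIO.write(otherImage, "png", new File("saved.png"));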
I have the following code to read a black-and-white picture in Java.
Image image = ImageIO.read(new File(path));
BufferedImage img = new BufferedImage(image.getWidth(null), image.getHeight(null), BufferedImage.TYPE_USHORT_GRAY);
Graphics g = img.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
int w = img.getWidth();
int h = img.getHeight();
int[][] array = new int[w][h];
for (int j = 0; j < w; j++) {
    for (int k = 0; k < h; k++) {
        array[j][k] = img.getRGB(j, k);
        System.out.print(array[j][k]);
    }
}
As you can see, I have set the type of the BufferedImage to TYPE_USHORT_GRAY, and I expect to see numbers between 0 and 255 in the 2D array, but instead I see -1 and other large integers. Can anyone highlight my mistake please?
As already mentioned in comments and answers, the mistake is using the getRGB() method, which converts your pixel values to packed int format in the default sRGB color space (TYPE_INT_ARGB). In this format, -1 is the same as 0xffffffff, which means pure white.
To access your unsigned short pixel data directly, try:
int w = img.getWidth();
int h = img.getHeight();
DataBufferUShort buffer = (DataBufferUShort) img.getRaster().getDataBuffer(); // Safe cast as img is of type TYPE_USHORT_GRAY
// Conveniently, the buffer already contains the data array
short[] arrayUShort = buffer.getData();
// Access it like:
int grayPixel = arrayUShort[x + y * w] & 0xffff;
// ...or alternatively, if you like to re-arrange the data to a 2-dimensional array:
int[][] array = new int[w][h];
// Note: I switched the loop order to access pixels in a more natural order
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        array[x][y] = buffer.getElem(x + y * w);
        System.out.print(array[x][y]);
    }
}
// Access it like:
grayPixel = array[x][y];
PS: It's probably still a good idea to look at the second link provided by @blackSmith, for proper color to gray conversion. ;-)
A BufferedImage of type TYPE_USHORT_GRAY, as its name says, stores pixels using 16 bits (the size of short is 16 bits). The range 0..255 is only 8 bits, so the stored values may well be beyond 255.
And BufferedImage.getRGB() does not return these 16 bits of pixel data but, quoting from its javadoc:
Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB) and default sRGB colorspace.
getRGB() will always return the pixel in RGB format regardless of the type of the BufferedImage.
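If all you need are values in the 0..255 range, one simple option (a sketch, assuming you read the samples from the raster as in the other answer) is to drop the low byte of each 16-bit sample:
// reduce a 16-bit gray sample (0..65535) to 8 bits (0..255)
int gray16 = img.getRaster().getSample(x, y, 0);
int gray8 = gray16 >> 8;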
I have a situation where I need to invert the alpha channel of a VolatileImage.
My current implementation is the obvious one, but it is very slow:
public BufferedImage invertImage(VolatileImage v) {
    BufferedImage b = new BufferedImage(v.getWidth(), v.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
    Graphics g = b.getGraphics();
    g.drawImage(v, 0, 0, null);
    for (int i = 0; i < b.getWidth(); i++) {
        for (int j = 0; j < b.getHeight(); j++) {
            Color c = new Color(b.getRGB(i, j), true);
            c = new Color(c.getRed(), c.getGreen(), c.getBlue(), 255 - c.getAlpha());
            b.setRGB(i, j, c.getRGB());
        }
    }
    return b;
}
This works fine, but is painfully slow. I have large images and need this to be fast. I have messed around with AlphaComposite but to no avail; this is not really a compositing problem as far as I understand.
Given that 255 - x is equivalent to x ^ 0xff for 0 <= x < 256, can I not do an en-masse XOR over the alpha channel somehow?
After a lot of googling, I came across the DataBuffer classes being used as direct views into BufferedImages:
DataBufferByte buf = (DataBufferByte) b.getRaster().getDataBuffer();
byte[] values = buf.getData();
// TYPE_4BYTE_ABGR stores each pixel as A, B, G, R, so every 4th byte starting at index 0 is the alpha value
for (int i = 0; i < values.length; i += 4) values[i] = (byte) (values[i] ^ 0xff);
This inverts the values of the BufferedImage (you do not need to draw it back over, altering the array values alters the buffered image itself).
My tests show this method is about 20 times faster than jazzbassrob's improvement, which was about 1.5 times faster than my original method.
You should be able to speed it up by avoiding all the getters and the constructor inside the loop:
for (int i = 0; i < b.getWidth(); i++) {
    for (int j = 0; j < b.getHeight(); j++) {
        b.setRGB(i, j, b.getRGB(i, j) ^ 0xFF000000);
    }
}