I have to make a program that reads a 24-bit BMP image (without ImageIO or any external library) and converts it into an 8-bit greyscale BMP image... I read that I must change the header of the image to make it 8-bit (Source 1 and Source 2). I read there that the BitCount bytes are at positions 29 and 30 of the header, so I try to change them...
First I read my file and generate the byte array like this:
FileInputStream image= new FileInputStream(path);
byte[] bytesImage = new byte[image.available()];
image.read(bytesImage);
image.close();
Then I read the image header fields and copy the image to a new array:
int width = byteToInt(bytesImage[18], bytesImage[19], bytesImage[20], bytesImage[21]);
int height = byteToInt(bytesImage[22], bytesImage[23], bytesImage[24], bytesImage[25]);
int header = byteToInt(bytesImage[14], bytesImage[15], bytesImage[16], bytesImage[17]) + 14; // Add 14 for the header
vecGrey = Arrays.copyOf(bytesImage, bytesImage.length);
Then I change the header info bytes to make it an 8-bit BMP like this:
byte[] values = intToByte(8);
vecGrey[28] = values[0]; // This is the index for the BitCount byte 1
vecGrey[29] = values[1]; // and this one is the index for the second one.
Okay, now comes the problem: for some reason I can't write a file with the modified header in vecGrey. I try to write vecGrey with the different header as shown here:
FileOutputStream aGrey = new FileOutputStream(name+ "-gray.bmp");
aGrey.write(vecGrey);
aGrey.close();
// This is a method that displays the resulting image in a frame...
makeInterface(name + "-gray.bmp");
I know that I must still change values in vecGrey, but this should at least work and show incorrect output (probably a non-greyscale image, or not a valid image at all). But when I try to read the file that I generate, in the makeInterface() method I get a
javax.imageio.IIOException: Unable to read the image header
So I assume that the program is unable to read the header correctly, but I don't know why! If I change the BitCount value to 16 it still works, but with 1, 4 or 8 it fails with the same error... I didn't upload my whole code because it's in Spanish, but if needed I can translate it and edit it in here.
Thanks!
EDIT1: I'm only using 640x480 24-bit BMP images, so I don't need to check padding.
When changing a BMP from 24-bit to 8-bit you have to change several other things in the header. First of all, the file size changes (bytes 3-6): since you are dealing with an 8-bit image there is one byte per pixel, so the new size should become
headerSize {usually 54} + numberOfColors*4 {the color table/palette; I recommend 256} + width*height {the actual amount of pixels}
Next you must indicate where the pixel data starts, which is right after the color table/palette. This offset is located in bytes 11-14 and the new value should be:
headerSize + numberOfColors*4
Next you need to modify the BITMAPINFOHEADER, which starts at byte 15. Bytes 15-18 contain the size of this second header, which is usually 40. If you just want to convert to grayscale, you can leave some bytes unmodified until you reach bytes 29 and 30, where you modify the BitCount (like you already did). Then, as far as I know, bytes 35-38 should receive the new image size we have already calculated. Bytes 47-50 determine the number of colors in your color palette; since you are doing grayscale I'd recommend using 256 colors, and I'll explain why in a bit. Bytes 51-54 contain the number of important colors; set it to 0 to indicate that every color is important.
Next you need to add the color table/palette right after the header. The reason I recommend 256 colors is that each palette entry is written as [B,G,R,0], where B, G and R are the Blue, Green and Red values and the last byte is a constant 0. With 256 entries you can make a palette where R=G=B for every entry, which yields a shade of gray. So, right after the header you must add this new series of bytes in ascending order:
[0,0,0,0] [1,1,1,0] [2,2,2,0] [3,3,3,0] ... [255,255,255,0]
Note that 256 is the numberOfColors you need to calculate the new size of the image; it's the number of "entries" in the color palette.
Next you'll want to write your new pixel data after the table/palette. Since you were given a 24-bit image, you can extract the pixel matrix and obtain the RGB values of each pixel. Just remember that you have a byte array with values from -128 to 127; you need to make sure you get the unsigned int value, so if the intensity of any channel is < 0, add 256 to it. Then you can apply an equation which gives you the intensity of gray:
Y' = 0.299R' + 0.587G' + 0.114B'
where Y' is the intensity of gray and R', G', B' are the intensities of Red, Green and Blue.
You can round the result of the equation and then write it as a byte to the image, and do the same with every pixel of the original image.
When you are done, simply add the two reserved 0s at the end of the file, and you should have a brand new 8-bit grayscale version of your 24-bit image.
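Purely as an illustration of the steps above, here is a minimal sketch (not the poster's code) that patches the header, appends a 256-entry grey palette, and converts the pixels. It assumes a bottom-up 24-bit input whose width is divisible by 4 (as in the 640x480 case); the file names are placeholders, and the array offsets are 0-based while the byte ranges above are 1-based.

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BmpToGrey {
    // Write a 32-bit value into buf at off, little-endian (as BMP requires).
    static void putIntLE(byte[] buf, int off, int v) {
        for (int i = 0; i < 4; i++) buf[off + i] = (byte) (v >>> (8 * i));
    }

    public static void main(String[] args) throws IOException {
        byte[] src = Files.readAllBytes(Paths.get("input.bmp")); // placeholder name
        int width  = (src[18] & 0xFF) | (src[19] & 0xFF) << 8 | (src[20] & 0xFF) << 16 | (src[21] & 0xFF) << 24;
        int height = (src[22] & 0xFF) | (src[23] & 0xFF) << 8 | (src[24] & 0xFF) << 16 | (src[25] & 0xFF) << 24;
        int srcOff = (src[10] & 0xFF) | (src[11] & 0xFF) << 8 | (src[12] & 0xFF) << 16 | (src[13] & 0xFF) << 24;

        int numberOfColors = 256;
        int newOff = 54 + numberOfColors * 4;          // header + palette
        byte[] dst = new byte[newOff + width * height];

        System.arraycopy(src, 0, dst, 0, 54);          // copy both headers
        putIntLE(dst, 2, dst.length);                  // new file size
        putIntLE(dst, 10, newOff);                     // new pixel data offset
        dst[28] = 8; dst[29] = 0;                      // BitCount = 8
        putIntLE(dst, 34, width * height);             // new pixel data size
        putIntLE(dst, 46, numberOfColors);             // palette entries
        putIntLE(dst, 50, 0);                          // 0 = all colors important

        for (int i = 0; i < numberOfColors; i++) {     // grey palette: [B,G,R,0]
            int p = 54 + i * 4;
            dst[p] = dst[p + 1] = dst[p + 2] = (byte) i;
        }

        for (int i = 0; i < width * height; i++) {     // 3 bytes (B,G,R) -> 1 grey byte
            int b = src[srcOff + 3 * i]     & 0xFF;    // & 0xFF makes the byte unsigned
            int g = src[srcOff + 3 * i + 1] & 0xFF;
            int r = src[srcOff + 3 * i + 2] & 0xFF;
            dst[newOff + i] = (byte) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
        }

        try (FileOutputStream out = new FileOutputStream("output-gray.bmp")) {
            out.write(dst);
        }
    }
}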
Hope this helped.
sources: The one you provided and:
https://en.wikipedia.org/wiki/BMP_file_format
https://en.wikipedia.org/wiki/Grayscale
You should first look at the hex dump of both a 24-bit BMP and a grayscale BMP, then go step by step:
-read the 24-bit BMP header
-read the data after the offset
-write the header of the 8-bit grayscale image
-write the data into the 8-bit grayscale image
Note: you have to convert the RGB values into an 8-bit grayscale value, e.g. by adding the R, G and B values and dividing by 3; see the sketch below.
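For example, a minimal per-pixel sketch of that averaging, where pixel is a hypothetical 3-byte B,G,R sample read from the 24-bit data:

// Average method: equal weight per channel. '& 0xFF' converts
// Java's signed bytes to unsigned values in 0..255.
int b = pixel[0] & 0xFF;
int g = pixel[1] & 0xFF;
int r = pixel[2] & 0xFF;
byte gray = (byte) ((r + g + b) / 3);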
I am trying to implement steganography.
I am reading an image "a.jpeg" and inserting a data byte into it by changing the least significant bit of 8 consecutive bytes, starting from offset 50.
This works so far: when I print the bytes, the last bits are changed accordingly.
Then I save it as "ao.jpeg". But when I read the byte values from offset 50 again, they are not the same as the ones I saved.
Here's my code:
public static void main(String[] args) throws IOException {
    BufferedImage inputImage = ImageIO.read(new File("a.jpeg"));
    int offset = 50;
    byte data = 7;
    byte[] image = get_byte_data(inputImage); // function converts BufferedImage to byte array
    // add data at the LSB of each byte starting from offset
    System.out.println("bytes altered are :");
    for (int bit = 7; bit >= 0; --bit, ++offset) { // for each bit of data
        int b = (data >>> bit) & 1;
        image[offset] = (byte) ((image[offset] & 0xFE) | b);
        String s1 = String.format("%8s", Integer.toBinaryString(image[offset] & 0xFF)).replace(' ', '0');
        System.out.println(s1);
    }
    // write changed image to ao.jpeg
    BufferedImage outputImage = ImageIO.read(new ByteArrayInputStream(image));
    File outputfile = new File("ao.jpeg");
    ImageIO.write(outputImage, "jpeg", outputfile);
    // read data back from ao.jpeg
    System.out.println("bytes from encoded image are :");
    byte result = 0;
    offset = 50;
    BufferedImage oImage = ImageIO.read(new File("ao.jpeg"));
    byte[] image1 = get_byte_data(oImage); // function converts BufferedImage to byte array
    for (int i = 0; i < 8; i++, ++offset) {
        result = (byte) ((result << 1) | (image1[offset] & 1));
        String s1 = String.format("%8s", Integer.toBinaryString(image1[offset] & 0xFF)).replace(' ', '0');
        System.out.println(s1);
    }
    System.out.println("recovered data is :");
    System.out.print(result);
}
Output sample:
The data inserted is 7. If you look at the least significant bit of each altered byte, together the bits form 7.
But when I read it back I get different bytes.
bytes altered are :
00010100
00011100
00011010
00011110
00011110
00011101
00011011
00011101
bytes from encoded image are :
00011110
00011101
00011010
00011100
00011100
00100000
00100100
00101110
recovered data is :
64
As suggested by Konstantin V. Salikhov, I tried a different file format (GIF) and it worked.
But is there any way I can use "jpeg"?
JPEG is a lossy storage mechanism. That means it is NOT required (or even desirable) that it represent every byte exactly as the original. Indeed, that is the whole point: it sacrifices small imperfections in order to achieve large space savings. If you need byte-perfect storage you will have to choose another format such as GIF, PNG, or some flavors of BMP.
As pointed out below, it is technically possible to create a lossless JPEG, but it was a late addition, is not fully supported, and in particular Java does not natively support it. See this answer for more information.
Why your approach fails
As Salikhov suggested, the jpeg format causes data compression. To summarise what you did:
You loaded the image data in a byte array.
You modified some of the bytes.
You loaded the byte array as an image.
You saved this image in jpeg format, which causes recompression.
That's where your method falls apart. A lossless format will not produce these problems, which is why gif works. Or png, bmp, etc...
Can you use your method for jpeg?
Weeeeell, no, not really. First, we need to understand what kind of data a jpeg image holds.
The short answer is that individual bytes don't correspond to actual pixels in the image.
The long story is that you split the image in 8x8 arrays and take the DCT to obtain the frequency coefficients. After a quantisation step, many of them will become 0, especially the higher frequency coefficients (bottom right of the DCT array - see here). This is the lossy step of jpeg. You sacrifice higher frequency coefficients for some tolerable image distortion (loss of information). Now, what you save is the non-zero coefficients, based on their location in the matrix, i.e. (0, 0, -26), (0, 1, -3), etc. This can be further compressed with Huffman coding. By the way, changing any one frequency component affects all 64 pixels.
So how is jpeg steganography normally done? It mostly follows the jpeg encoding process:
Split image in 8x8 arrays.
DCT each 8x8 array and quantise the coefficients.
Now we have obtained the quantised DCT coefficients and we take a break from the jpeg encoding process.
Apply some steganographic algorithm by changing the value of the quantised coefficients.
Huffman compress the coefficients and continue with the rest of the jpeg encoding process with these modified DCT coefficients.
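Plain Java/ImageIO does not expose the quantised DCT coefficients, so a real implementation needs a JPEG codec that does. Purely as a schematic sketch of step 4, assuming a hypothetical flat coefficient array laid out in 8x8 (64-entry) blocks:

// Schematic only: hide one message bit in the LSB of each usable quantised
// AC coefficient. Values 0 and 1 are skipped so that zero coefficients
// (which carry the compression) are never created or destroyed.
static void embed(int[] coeffs, byte[] message) {
    int bitIndex = 0;
    for (int i = 0; i < coeffs.length && bitIndex < message.length * 8; i++) {
        if (i % 64 == 0) continue;                  // skip the DC coefficient of each block
        if (coeffs[i] == 0 || coeffs[i] == 1) continue;
        int bit = (message[bitIndex / 8] >> (7 - bitIndex % 8)) & 1;
        coeffs[i] = (coeffs[i] & ~1) | bit;         // overwrite the LSB
        bitIndex++;
    }
}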
I would like to compute the Normalized Cross Correlation between two images using Java.
My program works fine but when I tried to verify my results in MATLAB, I unfortunately did not get the same results as I did with my Java implementation.
This is the code that I performed in MATLAB:
Img1 = rgb2gray(imread('image1.png'));
Img2 = rgb2gray(imread('image2.png'));
corr2(Img1, Img2);
This is part of my Java implementation. Some of the classes have been removed for better understanding:
I am not sure what is wrong with my Java implementation.
I also have one other question. In MATLAB, I had to convert the image to grayscale before using corr2. Do I need to do the same in Java?
The reason it isn't the same is that you didn't account for the headers in the PNG file.
If you take a look at your Java code, you are reading in the image as a byte stream in your readImage method. For PNG there are headers involved, such as the size of the image and how many bits of colour per pixel there are. Not only are you grabbing image data (which, by the way, is DEFLATE-compressed, so you're not even reading raw pixel data), but you are also grabbing extra information which ends up in your correlation code.
What's confusing is that you are reading in the image fine using the BufferedImage type at the beginning of your correlation code to obtain the rows and columns. Why did you switch to using a byte stream in your readImage method?
As such, you need to change your readImage method to take in the BufferedImage object, or reread the data there the way you did in the correlation method. Once you do that, use the BufferedImage methods to access the RGB pixels. FWIW, if you read the image in as grayscale, every channel gives you the same intensity, so you can operate on one channel alone; it doesn't matter which. But make sure you're doing correlation on grayscale images: it is ambiguous for colour, as there is currently no set standard on how to do this.
Using BufferedImage, you can use the getRGB method to obtain the pixel you want at column x and row y. x traverses from left to right, while y traverses from top to bottom. When you call getRGB it returns a single 32-bit integer in ARGB format. Each channel is 8 bits. As such, the first 8 bits (MSB) are the alpha value, the second 8 bits are for red, the third 8 bits are for green, and the final 8 are for blue. Depending on what channel you want, you need to bitshift and mask out the bits you don't need to get the value you want.
As an example:
int rgb = img.getRGB(x, y);
int alpha = rgb >> 24 & 0xFF;
int red = rgb >> 16 & 0xFF;
int green = rgb >> 8 & 0xFF;
int blue = rgb & 0xFF;
For the alpha value, you shift right by 24 bits to bring it down to the LSB positions, then mask with 0xFF to keep only the 8 bits that represent the alpha value. You would have to do the same for the red, green and blue channels. Because correlation is rather ill-posed for colour images, let's convert the image to grayscale within your readImage method; then there is no need to convert the image before you run it. We will do that within the method itself to save you some hassle.
If you take a look at how MATLAB performs rgb2gray, it computes a weighted sum, weighting the channels differently. The weights are defined by the SMPTE Rec. 601 standard (for those who want to check how I know this, look at the source of rgb2gray and read off the first row of its transformation matrix; those coefficients come straight from the 601 standard).
Previous versions of MATLAB simply added up all of the channels, divided by 3 and took the floor. I don't know which version of MATLAB you are using, but to be safe I'm going to use the most up to date conversion.
public static void readImage(BufferedImage img, int array[][], int nrows, int ncols) {
    for (int i = 0; i < nrows; i++)
        for (int j = 0; j < ncols; j++) {
            int rgb = img.getRGB(j, i);
            int red = rgb >> 16 & 0xFF;
            int green = rgb >> 8 & 0xFF;
            int blue = rgb & 0xFF;
            array[i][j] = (int) (0.299*((double)red) + 0.587*((double)green) +
                                 0.114*((double)blue));
        }
}
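For the correlation itself, here is a minimal sketch that mirrors MATLAB's corr2 (a plain Pearson correlation over all pixels); the method name and signature are just illustrative:

// Pearson correlation over all pixels, equivalent in spirit to corr2.
public static double corr2(int[][] a, int[][] b, int nrows, int ncols) {
    double meanA = 0, meanB = 0;
    for (int i = 0; i < nrows; i++)
        for (int j = 0; j < ncols; j++) {
            meanA += a[i][j];
            meanB += b[i][j];
        }
    meanA /= (double) nrows * ncols;
    meanB /= (double) nrows * ncols;
    double num = 0, denA = 0, denB = 0;
    for (int i = 0; i < nrows; i++)
        for (int j = 0; j < ncols; j++) {
            double da = a[i][j] - meanA;
            double db = b[i][j] - meanB;
            num  += da * db;
            denA += da * da;
            denB += db * db;
        }
    return num / Math.sqrt(denA * denB);
}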
This should hopefully give you what you want!
I read about LSB insertion online, but the article only explains how to insert bits into the LSB; it doesn't describe how to extract them. This is the article I read about LSB insertion.
I understand the method they use below, but how do you extract the bits?
Here's an algorithm for getting the encrypted message:
Read image.
Iterate over pixels.
Decompose pixel into RGB values (one byte for R, one for G, one for B)
Take the LSB from red. The LSB is bit zero, so you can AND the red value with a mask of 1 (binary 00000001). So, lsbValue = rvalue & 0x01. Place the lsbValue (it will only be one or zero) in the highest bit of the byte you are rebuilding.
Get the LSB from green. Place this in the next highest bit.
Get the LSB from blue. Place this in the next bit down.
Read the next pixel and decompose into RGB bytes.
Stuff the LSBs of the color components into bit positions until you've filled a byte. This is the first byte of your encrypted message.
Continue iterating over pixels and their RGB values until you've processed all pixels.
Inspect the bytes you've extracted. The actual message should be obvious. Anything beyond the encrypted message will just be noise, i.e., the LSBs of the actual image pixels.
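As a hedged sketch of that algorithm (the method name is illustrative; it assumes the message starts at the very first pixel and was embedded in R, G, B order):

// Recover the first hidden byte from a BufferedImage (java.awt.image).
// Reads the LSBs of R, G, B of consecutive pixels, MSB of the message first.
static byte extractFirstByte(BufferedImage img) {
    int result = 0, bits = 0;
    outer:
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int rgb = img.getRGB(x, y);
            int[] lsbs = { (rgb >> 16) & 1, (rgb >> 8) & 1, rgb & 1 }; // R, G, B
            for (int lsb : lsbs) {
                result = (result << 1) | lsb;
                if (++bits == 8) break outer;
            }
        }
    }
    return (byte) result;
}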
I've heard that in grayscale images with 8-bit color depth, the data of each pixel is stored in the first 7 bits of a byte and the last bit is kept intact! So we could store some information using the last bit of every pixel; is that true?
If so, how is the data interpreted in individual pixels? I mean, there is no Red, Blue and Green! So what do those bits mean?
And how can I calculate the average value of all pixels of an image?
I prefer to use pure Java classes, not JAI or other third-party libraries.
Update 1
BufferedImage image = ...; // loading image
image.getRGB(i, j);
The getRGB method always returns an int, which is bigger than one byte!
What should I do?
My understanding is that 8-bit colour depth means there are 8 bits per pixel (i.e. one byte) and that Red, Green and Blue all share this value, e.g. greyscale=192 means Red=192, Green=192, Blue=192. There is no "7 bits plus another 1 bit".
AFAIK, you can just use a normal average. However, I would use a long for the sum and make sure each byte is treated as unsigned, i.e. `b & 0xff`.
EDIT: If the grey scale value is, say, 128 (or 0x80), I would expect the RGB to be 128,128,128 or 0x808080.
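For the average, a minimal sketch using only standard classes (since R=G=B in a greyscale image, reading one channel is enough):

// Average intensity of a greyscale image; the blue channel (lowest byte)
// is representative because R, G and B are equal. Uses long to avoid overflow.
static double averageIntensity(BufferedImage image) {
    long sum = 0;
    for (int y = 0; y < image.getHeight(); y++)
        for (int x = 0; x < image.getWidth(); x++)
            sum += image.getRGB(x, y) & 0xFF; // 0..255
    return (double) sum / ((long) image.getWidth() * image.getHeight());
}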
How could a 32bpp (ARGB) image be converted to a 16bpp (ARGB) image using Java's libraries? Out of curiosity, at the pixel level, what does this conversion do? If I have an int value that holds the value of a pixel (with all the channels), how would that int be different after the conversion?
A 32-bit AARRGGBB value converted to a 16-bit ARGB value would be something like this:
int argb = ((aarrggbb & 0x000000F0) >> 4)
| ((aarrggbb & 0x0000F000) >> 8)
| ((aarrggbb & 0x00F00000) >> 12)
| ((aarrggbb & 0xF0000000) >> 16);
It sticks everything in the lower 16 bits and leaves the upper 16 bits as 0.
For each channel, you lose the lower 4-bits of colour info, the upper ones being somewhat more important. The colours would be quantized to the nearest 4-bit equivalent value resulting in a visually unpleasant colour banding effect across the image.
Incidentally, 16-bit colour does not normally include an alpha component. Normally (though not always) it breaks down as 5 bits for red, 6 bits for green (since our eyes are most sensitive to green) and 5 bits for blue.
This conversion would lose only 2 or 3 bits of information on each channel instead of 4, and would assume that the source pixel contained no alpha.
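As a hedged sketch of that RGB565 packing (the method name is illustrative; it assumes the source pixel's alpha can be dropped):

// Pack a 32-bit ARGB pixel into RGB565: keep the top 5/6/5 bits per channel.
static short toRgb565(int argb) {
    int r = (argb >> 16) & 0xFF;
    int g = (argb >> 8) & 0xFF;
    int b = argb & 0xFF;
    return (short) (((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}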
Read in the image and save it in the format you need. From http://www.exampledepot.com/egs/javax.imageio/Graphic2File.html
// Create an image to save
RenderedImage rendImage = myCreateImage();

// Write generated image to a file
try {
    // Save as PNG
    File file = new File("newimage.png");
    ImageIO.write(rendImage, "png", file);

    // Save as JPEG
    file = new File("newimage.jpg");
    ImageIO.write(rendImage, "jpg", file);
} catch (IOException e) {
    e.printStackTrace(); // don't swallow write errors silently
}
See the output from javax.imageio.ImageIO.getWriterFormatNames() to locate the format you need.
The internal representation of each pixel does not change (apart from the loss of precision) when using 16 bpp, but the bytes stored on disk will.