I have a .bin file that was created by MATLAB code as uint16, and I need to read it in Java.
With the code below, I get a blurry image with very poor grayscale, and the length of the file seems to be double the number of pixels. There seems to be some loss of information when reading the file this way. Is there a way to read .bin files other than input streams?
This is how I try to read the .bin file:
is = new FileInputStream(filename);
dis = new DataInputStream(is);
int[] buf = new int[length];
int[][] real = new int[x][y];
while (dis.available() > 0) {
    buf[i] = dis.readShort();
}
int counter = 0;
for (int j = 0; j < x; j++) {
    for (int k = 0; k < y; k++) {
        real[j][k] = buf[counter];
        counter++;
    }
}
return real;
And this is the part of the main class where the first class is called:
BinaryFile2 binary = new BinaryFile2();
int[][] image = binary.read("data001.bin", 1024, 2048);
BufferedImage theImage = new BufferedImage(1024, 2048, BufferedImage.TYPE_BYTE_GRAY);
for (int y = 0; y < 2048; y++) {
    for (int x = 0; x < 1024; x++) {
        int value = image[x][y];
        theImage.setRGB(x, y, value);
    }
}
File outputfile = new File("saved.png");
ImageIO.write(theImage, "png", outputfile);
You are storing uint16 data in an int array; this may lead to loss or corruption of data.
The following post discusses a similar issue:
Java read unsigned int, store, and write it back
To correctly read and display an image originally stored as uint16, it's best to use the BufferedImage.TYPE_USHORT_GRAY type. A Java short is 16 bits, and DataBufferUShort is made for storing unsigned 16-bit values.
Try this:
InputStream is = ...;
DataInputStream data = new DataInputStream(is);

BufferedImage theImage = new BufferedImage(1024, 2048, BufferedImage.TYPE_USHORT_GRAY);
short[] pixels = ((DataBufferUShort) theImage.getRaster().getDataBuffer()).getData();

for (int i = 0; i < pixels.length; i++) {
    pixels[i] = data.readShort(); // short value is signed, but DataBufferUShort will take care of the "unsigning"
}

// TODO: close() streams in a finally block
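One thing to verify is byte order: DataInputStream.readShort() always reads big-endian, while MATLAB's fwrite writes in the machine's native order by default, which is little-endian on most PCs. If the image still looks scrambled, the bytes probably just need swapping; a minimal sketch, assuming a little-endian file:
byte[] raw = new byte[1024 * 2048 * 2]; // 2 bytes per uint16 sample
data.readFully(raw);                    // read the whole file into memory
// Reinterpret the bytes as little-endian 16-bit values and copy them into the image buffer
ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(pixels);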
To convert the image further to an 8-bit image, you can create a new image and draw the original onto that:
BufferedImage otherImage = new BufferedImage(1024, 2048, BufferedImage.TYPE_BYTE_GRAY);

Graphics2D g = otherImage.createGraphics();
try {
    g.drawImage(theImage, 0, 0, null);
} finally {
    g.dispose();
}
Now you can store otherImage as an 8-bit grayscale PNG.
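For example (the file name is just an example):
ImageIO.write(otherImage, "png", new File("saved8bit.png"));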
I'm attempting to encrypt the content of a file, in this case a PNG, and not the actual file itself. This way I can see the difference between ECB and CBC encryption (example photos here).
My intuition: I'm unsure if this is the best or even a correct approach, but my logic is to take the pixel data from the PNG and store it in an array, then convert that array to a byte array. This way I can encrypt it using either ECB or CBC and then simply reverse the process afterwards.
My attempted code: Is this conversion correct, and if not, how would I correctly convert it? I suspect it's incorrect because somewhere in this conversion the RGB values are getting messed up, and that's why the ECB implementation fails to draw an outline.
// 1. Store rgb values into array
int w = image.getWidth();  // width
int h = image.getHeight(); // height
int total_pixels = (h * w);
Color[] colors = new Color[total_pixels];
int i = 0;
for (int x = 0; x < w; x++) {
    for (int y = 0; y < h; y++) {
        colors[i] = new Color(image.getRGB(x, y));
        i++;
    } // end inner for-loop
} // end outer for-loop

// 2. Convert int array into byte array for encryption
ByteBuffer byteBuffer = ByteBuffer.allocate(colors.length * 4);
IntBuffer intBuffer = byteBuffer.asIntBuffer();
intBuffer.put(total_pixels); // This does not accept colors as input and is the wrong variable, but I'm using it to show what's happening
byte[] toBeEnc = byteBuffer.array();
After encryption and reversing my process:
ECB: incorrect output; it should show a rough outline of the penguin, as in the GitHub link attached.
CBC: this is actually correct, given the nature of CBC encryption.
Additional code after encryption: I know this reversal is also probably incorrect, but I figured that if I can get the initial conversion correct, I will be able to fix this.
byte[] encBytes = cipher.doFinal(toBeEnc);//encrypted byte array
// 4. Convert byte array back to int array
IntBuffer intBuf = ByteBuffer.wrap(encBytes).order(ByteOrder.LITTLE_ENDIAN).asIntBuffer();
int[] encArray = new int[intBuf.remaining()];
intBuf.get(encArray);
// 5. Convert int array into file format
DataBuffer rgbData = new DataBufferInt(encArray, encArray.length);
WritableRaster raster = Raster.createPackedRaster(rgbData, w, h, w, new int[]{0xff0000, 0xff00, 0xff},null);
ColorModel colorModel = new DirectColorModel(24, 0xff0000, 0xff00, 0xff);
BufferedImage img = new BufferedImage(colorModel, raster, false, null);
String fileName = "C:\\Users\\Mark Case\\Pictures\\Saved Pictures\\tux-enc.png";
ImageIO.write(img, "png", new File(fileName));
Here is the correct solution to this problem, with a well-commented explanation:
public static byte[] convertImgData2DToByte(BufferedImage image) {
    int width = image.getWidth();   // Width
    int height = image.getHeight(); // Height
    int[][] result = new int[height][width]; // 2D array initialization

    /*
     * Nested for-loop that iterates through every position in the image
     * and stores the pixel data from each into the 2D int array
     */
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            result[row][col] = image.getRGB(col, row);
        }
    }

    /*
     * Step 2. Convert the resulting 2D array of RGB values
     * into a byte array so that it can be encrypted.
     */
    int sizeOfResult = 0; // Initialization
    for (int i = 0; i < result.length; i++) { // Counts each row length to find the total number of ints
        sizeOfResult += result[i].length;
    }

    ByteBuffer byteBuffer = ByteBuffer.allocate(sizeOfResult * 4); // Memory allocation (each int requires 4 bytes)
    IntBuffer intBuffer = byteBuffer.asIntBuffer(); // Shares the same memory allocation as byteBuffer

    for (int i = 0; i < result.length; i++) { // Loops through again to store every row into the intBuffer
        intBuffer.put(result[i]);
    }

    byte[] buffer = byteBuffer.array(); // Final byte representation
    return buffer; // Return value for the ImageEncryption class
}
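For the reverse direction (after decryption), a minimal sketch along the same lines, assuming the byte array still holds exactly width * height packed RGB ints:
public static BufferedImage convertByteToImg(byte[] bytes, int width, int height) {
    // Reinterpret the bytes as ints; ByteBuffer defaults to big-endian, matching the conversion above
    IntBuffer intBuffer = ByteBuffer.wrap(bytes).asIntBuffer();
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            image.setRGB(col, row, intBuffer.get()); // consume the ints in the same row-major order
        }
    }
    return image;
}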
I have MATLAB code that converts an image to a matrix. It then uses a zigzag reading operation to convert the multi-dimensional array into a one-dimensional array. Pseudo-code (MATLAB):
array[M][N]=read_image(Image)
array[1][MN]=zigzag(array[M][N])
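(For reference, the zigzag step itself is just a diagonal scan of the matrix; a minimal Java sketch of the JPEG-style traversal, assuming an int[][] input, could look like this:)
static int[] zigzag(int[][] a) {
    int m = a.length, n = a[0].length;
    int[] out = new int[m * n];
    int idx = 0;
    for (int s = 0; s < m + n - 1; s++) {      // s = row + col, the anti-diagonal index
        if (s % 2 == 0) {                      // even diagonals: walk up-right
            for (int row = Math.min(s, m - 1); row >= 0 && s - row < n; row--) {
                out[idx++] = a[row][s - row];
            }
        } else {                               // odd diagonals: walk down-left
            for (int col = Math.min(s, n - 1); col >= 0 && s - col < m; col--) {
                out[idx++] = a[s - col][col];
            }
        }
    }
    return out;
}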
I think I can use the code below in Java to convert a Bitmap to a byte array.
// convert from bitmap to byte array
public byte[] getBytesFromBitmap(Bitmap bitmap) {
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(CompressFormat.JPEG, 70, stream);
    return stream.toByteArray();
}
But I'm not sure. Is this approach equivalent to the MATLAB code, or must I use another way?
Edit: I also found these:
public int[][] getMatrixOfImage(BufferedImage bufferedImage) {
    int width = bufferedImage.getWidth(null);
    int height = bufferedImage.getHeight(null);
    int[][] pixels = new int[width][height];
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            pixels[i][j] = bufferedImage.getRGB(i, j);
        }
    }
    return pixels;
}
and also found this:
width = bitmap.getWidth();
height = bitmap.getHeight();
int size = bitmap.getRowBytes() * bitmap.getHeight();
ByteBuffer byteBuffer = ByteBuffer.allocate(size);
bitmap.copyPixelsToBuffer(byteBuffer);
byteArray = byteBuffer.array();
So which is the best way? Thanks anyway.
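Note that bitmap.compress(CompressFormat.JPEG, 70, stream) produces JPEG-encoded bytes, not the raw pixel matrix that the MATLAB read_image call returns. For raw pixels on Android, something like the following sketch (using Bitmap.getPixels, which fills an int[] with packed ARGB values in row-major order) is closer to the MATLAB matrix:
int w = bitmap.getWidth();
int h = bitmap.getHeight();
int[] pixels = new int[w * h];
bitmap.getPixels(pixels, 0, w, 0, 0, w, h); // row-major ARGB, i.e. the equivalent of an h-by-w matrix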
I am trying to save a depth map from Kinect v2, which should come out as grayscale, but every time I try to save it as a JPG file using the type BufferedImage.TYPE_USHORT_GRAY, literally nothing happens (no warning on screen or in the console).
I manage to save it if I use the types BufferedImage.TYPE_USHORT_555_RGB or BufferedImage.TYPE_USHORT_565_RGB, but instead of being grayscale, the depth maps come out bluish or greenish.
Find below the code sample:
short[] depth = myKinect.getDepthFrame();

int DHeight = 424;
int DWidth = 512;
int dx = 0;
int dy = 21;

BufferedImage bufferDepth = new BufferedImage(DWidth, DHeight, BufferedImage.TYPE_USHORT_GRAY);

try {
    ImageIO.write(bufferDepth, "jpg", outputFileD);
} catch (IOException e) {
    e.printStackTrace();
}
Is there anything I am doing wrong to save it in grayscale?
Thanks in advance
You have to assign your data (depth) to the BufferedImage (bufferDepth) first.
A simple way to do this is:
short[] depth = myKinect.getDepthFrame();

int DHeight = 424;
int DWidth = 512;
int dx = 0;
int dy = 21;

BufferedImage bufferDepth = new BufferedImage(DWidth, DHeight, BufferedImage.TYPE_USHORT_GRAY);

for (int j = 0; j < DHeight; j++) {
    for (int i = 0; i < DWidth; i++) {
        int index = i + j * DWidth;
        int value = depth[index] & 0xFFFF;    // treat the short as unsigned
        int gray = Math.min(255, value / 32); // scale to 0..255, since Color only accepts 8-bit components (adjust the divisor to your depth range)
        Color color = new Color(gray, gray, gray);
        bufferDepth.setRGB(i, j, color.getRGB());
    }
}

try {
    ImageIO.write(bufferDepth, "jpg", outputFileD);
} catch (IOException e) {
    e.printStackTrace();
}
I have read my original image as a BufferedImage in Java, and after some operations I am trying to threshold the image to either high (255) or low (0). But when I save my image (actually, I overwrite it with the new values), the pixel values are not only 0 and 255; some neighbouring values appear, and I don't understand why.
READING MY IMAGE
File input = new File("/../Screenshots/1.jpg");
BufferedImage image = ImageIO.read(input);

for (int i = 0; i < image.getWidth(); i++) {
    for (int j = 0; j < image.getHeight(); j++) {
        Color c = new Color(image.getRGB(i, j));
        powerspectrum[i][j] = (int) ((c.getRed() * 0.299)
                + (c.getGreen() * 0.587) + (c.getBlue() * 0.114));
    }
}
THRESHOLDING MY IMAGE
for (int i = 0; i < image.getWidth(); i++) {
    for (int j = 0; j < image.getHeight(); j++) {
        if (gradient[i][j] <= upperthreshold
                && gradient[i][j] >= lowerthreshold)
            spaces[i][j] = 255;
        else
            spaces[i][j] = 0;
        Color gradColor = new Color(spaces[i][j], spaces[i][j],
                spaces[i][j]);
        image.setRGB(i, j, gradColor.getRGB());
    }
}
SAVING MY IMAGE
File gradoutput = new File("/../Screenshots/3_GradThresh.jpg");
ImageIO.write(image, "jpg", gradoutput);
I don't know how to cut off the other intensity values.
I suspect this is because JPG is a lossy format. When you save a JPG to disk, it applies lossy compression. Try working with a bitmap (BMP) to see if that removes these neighbouring gray values.
+1 on the JPG compression issue. In image processing, we use PNG (the best lossless compression format) or, at worst, TIFF.
By the way, the methods setRGB/getRGB perform terribly. The fastest approach is to modify the DataBuffer directly, but then you have to handle each type of image encoding yourself. An alternative (but slower) solution is to use the Raster; then you don't have to worry about the encoding.
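A minimal sketch of the Raster approach for this case, assuming spaces holds the thresholded 0/255 values and the image was loaded from a colour JPG (so every band gets the same value), then saved as PNG to keep the hard threshold intact:
WritableRaster raster = image.getRaster();
int bands = raster.getNumBands();
for (int i = 0; i < image.getWidth(); i++) {
    for (int j = 0; j < image.getHeight(); j++) {
        for (int b = 0; b < bands; b++) {
            raster.setSample(i, j, b, spaces[i][j]); // write 0 or 255 into every band
        }
    }
}
ImageIO.write(image, "png", new File("/../Screenshots/3_GradThresh.png")); // lossless output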
I am getting the int array from a PNG image. How can I convert this to a BufferedImage or create a new PNG file?
int[] pixel = new int[w1 * h1];
int i = 0;
for (int xx = 0; xx < h1; xx++) {
    for (int yy = 0; yy < w1; yy++) {
        pixel[i] = img.getRGB(yy, xx);
        i++;
    }
}
If you have an array of integers which are packed RGB values, this is the Java code to save it to a file:
int width = 100;
int height = 100;
int[] rgbs = buildRaster(width, height);
DataBuffer rgbData = new DataBufferInt(rgbs, rgbs.length);
WritableRaster raster = Raster.createPackedRaster(rgbData, width, height, width,
new int[]{0xff0000, 0xff00, 0xff},
null);
ColorModel colorModel = new DirectColorModel(24, 0xff0000, 0xff00, 0xff);
BufferedImage img = new BufferedImage(colorModel, raster, false, null);
String fname = "/tmp/whatI.png";
ImageIO.write(img, "png", new File(fname));
System.out.println("wrote to "+fname);
The reason for the masks 0xff0000, 0xff00, 0xff is that the RGB bytes are packed with blue in the least significant byte. If you pack your ints differently, alter that array.
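If the array is already in that default packed-RGB order and was filled row by row as in the question (scansize w1), a simpler, if somewhat slower, alternative is the bulk BufferedImage.setRGB method; a minimal sketch:
BufferedImage img = new BufferedImage(w1, h1, BufferedImage.TYPE_INT_RGB);
img.setRGB(0, 0, w1, h1, pixel, 0, w1); // copy the packed RGB ints straight into the image
ImageIO.write(img, "png", new File("rebuilt.png")); // "rebuilt.png" is just an example name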
You can rebuild the image manually; this is, however, a pretty expensive operation.
BufferedImage image = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
Graphics g = image.getGraphics();

for (int i = 0; i < pixels.size(); i++) {
    g.setColor(new java.awt.Color(pixels.get(i).getRed(), pixels.get(i).getGreen(), pixels.get(i).getBlue()));
    g.fillRect(pixels.get(i).getxPos(), pixels.get(i).getyPos(), 1, 1);
}

try {
    ImageIO.write(image, "PNG", new File("imageName.png"));
} catch (IOException error) {
    error.printStackTrace();
}
I formatted your image array into an object; this is personal preference, though (of course you could use an int array with this model as well). Keep in mind that you can always add the alpha channel there as well.
Try the ImageIO class, which can read a byte array of encoded image data (e.g., the bytes of a PNG file) into an image object and then write it out in a particular format.
try {
    BufferedImage bufferedImage = ImageIO.read(new ByteArrayInputStream(yourBytes));
    ImageIO.write(bufferedImage, "png", new File("out.png"));
} catch (IOException e) {
    e.printStackTrace();
}