I'm attempting to encrypt the contents of a file, in this case a PNG, and not the actual file itself. This way I can see the difference between ECB and CBC encryption (example photos here:.
My intuition: I'm unsure if this is the best or even correct approach, but my logic is to take the pixel data from the PNG and store it in an array, then convert that array to a byte array. This way I can encrypt it with either ECB or CBC and simply reverse the process afterwards.
My attempted code: Is this conversion correct, and if not, how would I correctly convert it? I suspect it's incorrect because somewhere in this conversion the RGB values are getting messed up, and that's why the ECB implementation fails to draw an outline.
// 1. Store RGB values into an array
int w = image.getWidth();  // width
int h = image.getHeight(); // height
int total_pixels = (h * w);
Color[] colors = new Color[total_pixels];
int i = 0;
for (int x = 0; x < w; x++) {
    for (int y = 0; y < h; y++) {
        colors[i] = new Color(image.getRGB(x, y));
        i++;
    } // end inner for-loop
} // end outer for-loop

// 2. Convert int array into byte array for encryption
ByteBuffer byteBuffer = ByteBuffer.allocate(colors.length * 4);
IntBuffer intBuffer = byteBuffer.asIntBuffer();
intBuffer.put(total_pixels); // put() does not accept a Color[] as input; this is the wrong variable, but it shows what's happening
byte[] toBeEnc = byteBuffer.array();
After encryption and reversing my process:
ECB: Incorrect output; it should show a rough outline of the penguin, as in the GitHub link attached.
CBC: This is actually correct, given the nature of CBC encryption.
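For reference, the mode is chosen when the Cipher is created. A minimal sketch of the two setups, assuming AES (the key and IV handling below is illustrative, not from the original post):

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.SecureRandom;

SecretKey key = KeyGenerator.getInstance("AES").generateKey();

// ECB: each 16-byte block is encrypted independently, so identical plaintext
// blocks produce identical ciphertext blocks (which is why the outline survives)
Cipher ecb = Cipher.getInstance("AES/ECB/PKCS5Padding");
ecb.init(Cipher.ENCRYPT_MODE, key);

// CBC: each block is XORed with the previous ciphertext block before encryption,
// so repeating patterns are destroyed and the image becomes noise
byte[] iv = new byte[16];
new SecureRandom().nextBytes(iv);
Cipher cbc = Cipher.getInstance("AES/CBC/PKCS5Padding");
cbc.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));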
Additional code after encryption: I know this reversal is also probably incorrect, but I figured that if I can get the initial conversion correct, I will be able to fix this.
byte[] encBytes = cipher.doFinal(toBeEnc); // encrypted byte array

// 4. Convert byte array back to int array
// (note: the byte order here must match the forward conversion; ByteBuffer.allocate() above used the big-endian default)
IntBuffer intBuf = ByteBuffer.wrap(encBytes).order(ByteOrder.LITTLE_ENDIAN).asIntBuffer();
int[] encArray = new int[intBuf.remaining()];
intBuf.get(encArray);

// 5. Convert int array into file format
DataBuffer rgbData = new DataBufferInt(encArray, encArray.length);
WritableRaster raster = Raster.createPackedRaster(rgbData, w, h, w, new int[]{0xff0000, 0xff00, 0xff}, null);
ColorModel colorModel = new DirectColorModel(24, 0xff0000, 0xff00, 0xff);
BufferedImage img = new BufferedImage(colorModel, raster, false, null);
String fileName = "C:\\Users\\Mark Case\\Pictures\\Saved Pictures\\tux-enc.png";
ImageIO.write(img, "png", new File(fileName));
Here is the correct solution to this problem with a well commented explanation:
public static byte[] convertImgData2DToByte(BufferedImage image) {
    int width = image.getWidth();   // Width
    int height = image.getHeight(); // Height
    int[][] result = new int[height][width]; // 2D array initialization

    /*
       Nested for-loop that iterates through
       every position in the image and stores the pixel data from each
       into the 2D int array
    */
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            result[row][col] = image.getRGB(col, row);
        }
    }

    /*
       Step 2. Convert the resulting 2D array of RGB values
       into a byte array. This way it can be encrypted.
    */
    int sizeOfResult = 0; // Initialization
    for (int i = 0; i < result.length; i++) { // Counts each row length to find the total number of ints
        sizeOfResult += result[i].length;
    }
    ByteBuffer byteBuffer = ByteBuffer.allocate(sizeOfResult * 4); // Memory allocation (each int requires 4 bytes)
    IntBuffer intBuffer = byteBuffer.asIntBuffer(); // Views the same memory as byteBuffer
    for (int i = 0; i < result.length; i++) { // Loops through again to store every row of ints into the intBuffer
        intBuffer.put(result[i]);
    }
    byte[] buffer = byteBuffer.array(); // Final byte representation
    return buffer; // Return value for the ImageEncryption class
}
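For completeness, a sketch of the reverse direction (this is not part of the original answer): wrap the processed bytes back into an IntBuffer, relying on the same default big-endian byte order that ByteBuffer.allocate() used above, and write the ints back as packed RGB pixels in the same row-major order.

public static BufferedImage convertByteToImg(byte[] bytes, int width, int height) {
    IntBuffer intBuffer = ByteBuffer.wrap(bytes).asIntBuffer(); // big-endian by default, matching the forward conversion
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            image.setRGB(col, row, intBuffer.get()); // same traversal order as the forward loop
        }
    }
    return image;
}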
I have MATLAB code that converts an image to a matrix array. It then uses a zigzag reading operation in the conversion from a multi-dimensional array to a one-dimensional array. Pseudo-code (MATLAB):
array[M][N]=read_image(Image)
array[1][MN]=zigzag(array[M][N])
I think I can use the code below in Java to convert a Bitmap to a byte array.
// convert from bitmap to byte array
public byte[] getBytesFromBitmap(Bitmap bitmap) {
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(CompressFormat.JPEG, 70, stream);
    return stream.toByteArray();
}
But I'm not sure. Is this approach equivalent to the MATLAB code, or must I use another way?
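Note that Bitmap.compress() runs the pixels through a lossy JPEG encoder, so the bytes you get back are a compressed file, not the pixel matrix the MATLAB code reads. The zigzag step would also have to be written by hand; a rough sketch of a JPEG-style zigzag scan over an int matrix (illustrative only, not from the original post):

// Flattens an M x N matrix by walking its anti-diagonals, alternating direction
public static int[] zigzag(int[][] matrix) {
    int m = matrix.length, n = matrix[0].length;
    int[] out = new int[m * n];
    int idx = 0;
    for (int d = 0; d < m + n - 1; d++) {
        if (d % 2 == 0) { // even diagonals: walk from bottom-left to top-right
            for (int row = Math.min(d, m - 1); row >= 0 && d - row < n; row--) {
                out[idx++] = matrix[row][d - row];
            }
        } else {          // odd diagonals: walk from top-right to bottom-left
            for (int col = Math.min(d, n - 1); col >= 0 && d - col < m; col--) {
                out[idx++] = matrix[d - col][col];
            }
        }
    }
    return out;
}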
Edit: I also found these:
public int[][] getMatrixOfImage(BufferedImage bufferedImage) {
    int width = bufferedImage.getWidth(null);
    int height = bufferedImage.getHeight(null);
    int[][] pixels = new int[width][height];
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            pixels[i][j] = bufferedImage.getRGB(i, j);
        }
    }
    return pixels;
}
and also found this:
width = bitmap.getWidth();
height = bitmap.getHeight();
int size = bitmap.getRowBytes() * bitmap.getHeight();
ByteBuffer byteBuffer = ByteBuffer.allocate(size);
bitmap.copyPixelsToBuffer(byteBuffer);
byteArray = byteBuffer.array();
So which is the best way? Thanks anyway.
I've got a byte array storing 16-bit pixel data from an already-deconstructed DICOM file. What I need to do now is convert/export that pixel data somehow into a TIFF file format. I'm using the imageio-tiff-3.3.2.jar plugin to handle the tiff conversion/header data. But now I need to pack that image data array into a BufferedImage of the original image dimensions so it can be exported to TIFF. But it seems that BufferedImage doesn't support 16-bit images. Is there a way around this problem, such as an external library? Is there another way I can pack that image data into a TIFF image of the original DICOM dimensions? Keep in mind, this process has to be completely lossless. I've looked around and tried out some things for the last few days, but so far nothing has worked for me.
Let me know if you have any questions or if there's anything I can do to clear up any confusion.
EDIT: Intended and Current image
Given your input data of a raw byte array containing unsigned 16-bit image data, here are two ways to create a BufferedImage.
The first one will be slower, as it involves copying the byte array into a short array. It will also need twice the amount of memory. The upside is that it creates a standard TYPE_USHORT_GRAY BufferedImage, which may be faster to display and may be more compatible.
private static BufferedImage createCopyUsingByteBuffer(int w, int h, byte[] rawBytes) {
    short[] rawShorts = new short[rawBytes.length / 2];
    ByteBuffer.wrap(rawBytes)
              // .order(ByteOrder.LITTLE_ENDIAN) // Depending on the data's endianness
              .asShortBuffer()
              .get(rawShorts);

    DataBuffer dataBuffer = new DataBufferUShort(rawShorts, rawShorts.length);
    int stride = 1;
    WritableRaster raster = Raster.createInterleavedRaster(dataBuffer, w, h, w * stride, stride, new int[] {0}, null);
    ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);

    return new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
}
A variant that is much faster to create (the previous version takes 4-5x longer), but results in a TYPE_CUSTOM image that might be slower to display (it does seem to perform reasonably in my tests, though). It uses very little extra memory, as it does no copying/conversion of the input data at creation time.
Instead, it uses a custom sample model that has DataBuffer.TYPE_USHORT as its transfer type but uses a DataBufferByte as its data buffer.
private static BufferedImage createNoCopy(int w, int h, byte[] rawBytes) {
    DataBuffer dataBuffer = new DataBufferByte(rawBytes, rawBytes.length);
    int stride = 2;
    SampleModel sampleModel = new MyComponentSampleModel(w, h, stride);
    WritableRaster raster = Raster.createWritableRaster(sampleModel, dataBuffer, null);
    ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);

    return new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
}

private static class MyComponentSampleModel extends ComponentSampleModel {
    public MyComponentSampleModel(int w, int h, int stride) {
        super(DataBuffer.TYPE_USHORT, w, h, stride, w * stride, new int[] {0});
    }

    @Override
    public Object getDataElements(int x, int y, Object obj, DataBuffer data) {
        if ((x < 0) || (y < 0) || (x >= width) || (y >= height)) {
            throw new ArrayIndexOutOfBoundsException("Coordinate out of bounds!");
        }

        // Simplified, as we only support TYPE_USHORT
        int numDataElems = getNumDataElements();
        int pixelOffset = y * scanlineStride + x * pixelStride;

        short[] sdata;
        if (obj == null) {
            sdata = new short[numDataElems];
        }
        else {
            sdata = (short[]) obj;
        }

        for (int i = 0; i < numDataElems; i++) {
            sdata[i] = (short) (data.getElem(0, pixelOffset) << 8 | data.getElem(0, pixelOffset + 1));
            // If little endian, swap the element order, like this:
            // sdata[i] = (short) (data.getElem(0, pixelOffset + 1) << 8 | data.getElem(0, pixelOffset));
        }

        return sdata;
    }
}
If your image looks strange after this conversion, try flipping the endianness, as commented in the code.
And finally, some code to exercise the above:
public static void main(String[] args) {
    int w = 1760;
    int h = 2140;
    byte[] rawBytes = new byte[w * h * 2]; // This will be your input array, 7532800 bytes

    ShortBuffer buffer = ByteBuffer.wrap(rawBytes)
            // .order(ByteOrder.LITTLE_ENDIAN) // Try swapping the byte order to see sharp edges
            .asShortBuffer();

    // Let's make a simple gradient, from black UL to white BR
    int max = 65535; // Unsigned short max value
    for (int y = 0; y < h; y++) {
        double v = max * y / (double) h;
        for (int x = 0; x < w; x++) {
            buffer.put((short) Math.round((v + max * x / (double) w) / 2.0));
        }
    }

    final BufferedImage image = createNoCopy(w, h, rawBytes);
    // final BufferedImage image = createCopyUsingByteBuffer(w, h, rawBytes);

    SwingUtilities.invokeLater(new Runnable() {
        @Override
        public void run() {
            JFrame frame = new JFrame("Test");
            frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
            frame.add(new JScrollPane(new JLabel(new ImageIcon(image))));
            frame.pack();
            frame.setLocationRelativeTo(null);
            frame.setVisible(true);
        }
    });
}
Here's what the output should look like (scaled down to 1/10th):
The easiest thing to do is to create a BufferedImage of type TYPE_USHORT_GRAY, which is the type to use for 16-bit encoding.
public BufferedImage Convert(short[] array, final int width, final int height)
{
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_USHORT_GRAY);
    short[] sb = ((DataBufferUShort) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(array, 0, sb, 0, array.length);
    return image;
}
Then you can use javax.imageio to save your image as a TIFF or a PNG. I think the TwelveMonkeys project provides better TIFF support for ImageIO, but you should check first.
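With the TwelveMonkeys TIFF plugin on the classpath, the write itself is then the usual ImageIO call (a sketch; the file name is made up):

BufferedImage image = Convert(pixels, width, height);
if (!ImageIO.write(image, "TIFF", new File("output.tif"))) {
    System.err.println("No TIFF writer found; check that the plugin is on the classpath");
}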
[EDIT] In your case, because you deal with huge DICOM images that cannot be stored in a regular BufferedImage, you have to create your own type, using the Unsafe class to allocate the DataBuffer:
Create a new class DataBufferLongShort that will allocate the needed array/DataBuffer using the Unsafe class, so that you can use long indexes instead of int
Create a new class DataBuffer that extends the classical DataBuffer in order to add a type TYPE_LONG_USHORT
Then you can create the ColorModel with the new DataBuffer; a rough sketch of this idea follows.
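Everything below is a hypothetical sketch of that outline, assuming sun.misc.Unsafe is accessible via reflection. DataBuffer's own API stays int-indexed, so the long-indexed accessors necessarily sit alongside it, and a real implementation would also have to free the off-heap memory.

import java.awt.image.DataBuffer;
import java.lang.reflect.Field;
import sun.misc.Unsafe;

class DataBufferLongShort extends DataBuffer {
    private static final Unsafe UNSAFE;
    static {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            UNSAFE = (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private final long address; // off-heap base address

    DataBufferLongShort(long size) {
        super(TYPE_USHORT, (int) Math.min(size, Integer.MAX_VALUE));
        this.address = UNSAFE.allocateMemory(size * 2); // 2 bytes per unsigned short
    }

    // Long-indexed accessors, usable beyond Integer.MAX_VALUE elements
    public int getElemLong(long i) {
        return UNSAFE.getShort(address + i * 2) & 0xffff;
    }

    public void setElemLong(long i, int val) {
        UNSAFE.putShort(address + i * 2, (short) val);
    }

    @Override
    public int getElem(int bank, int i) {
        return getElemLong(i);
    }

    @Override
    public void setElem(int bank, int i, int val) {
        setElemLong(i, val);
    }
}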
I am testing out the new Camera2 API, and I'm able to capture the camera preview in YUV_420_888 format. What I need to do next is to feed this data to a image processing library, which accepts a byte[] parameter.
I've found examples of converting YUV_420_888 to RGB and such, but I still need to convert the resulting Bitmap to a byte[] through a ByteArrayOutputStream, which, after experimenting, slows down the app tremendously.
My question is, how do I convert YUV_420_888 to byte[] efficiently?
What is the actual format of the byte[] array the image processing library wants? Is it RGB? YUV planar? YUV semiplanar?
Assuming it's RGB, given that you reference converting YUV_420_888 to RGB, you can just modify that example to not create a Bitmap from the allocation; just use Allocation.copyTo with a byte[] instead of a Bitmap.
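A sketch of what that looks like with the YUV intrinsic (the variable names and setup here are mine, not from the linked example; note the intrinsic expects NV21-style input, so YUV_420_888 planes may need repacking first):

RenderScript rs = RenderScript.create(context);
ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

// Input: the raw YUV bytes; output: an RGBA allocation of the frame size
Allocation in = Allocation.createSized(rs, Element.U8(rs), yuvBytes.length);
Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height).create();
Allocation out = Allocation.createTyped(rs, rgbaType);

in.copyFrom(yuvBytes);
yuvToRgb.setInput(in);
yuvToRgb.forEach(out);

byte[] rgba = new byte[width * height * 4];
out.copyTo(rgba); // straight into a byte[], no Bitmap or ByteArrayOutputStream needed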
I took a lot of time looking for a solution and finally found one, based on another answer here on Stack Overflow. I want to share my customized code, which has been optimized for the number of loop iterations. It works for me with YUV_420_888 images from the camera2 API:
public static byte[] imageToMat(Image image) {
    Image.Plane[] planes = image.getPlanes();

    ByteBuffer buffer0 = planes[0].getBuffer();
    ByteBuffer buffer1 = planes[1].getBuffer();
    ByteBuffer buffer2 = planes[2].getBuffer();

    int width = image.getWidth();
    int height = image.getHeight();

    byte[] data = new byte[image.getWidth() * image.getHeight() * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8];
    byte[] rowData1 = new byte[planes[1].getRowStride()];
    byte[] rowData2 = new byte[planes[2].getRowStride()];

    int bytesPerPixel = ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;

    // loop via rows of u/v channels
    int offsetY = 0;
    int sizeY = width * height * bytesPerPixel;
    int sizeUV = (width * height * bytesPerPixel) / 4;

    for (int row = 0; row < height; row++) {
        // fill data for the Y channel, one row at a time
        {
            int length = bytesPerPixel * width;
            buffer0.get(data, offsetY, length);

            if (height - row != 1)
                buffer0.position(buffer0.position() + planes[0].getRowStride() - length);

            offsetY += length;
        }

        if (row >= height / 2)
            continue;

        {
            int uvlength = planes[1].getRowStride();

            if ((height / 2 - row) == 1) {
                uvlength = width / 2 - planes[1].getPixelStride() + 1;
            }

            buffer1.get(rowData1, 0, uvlength);
            buffer2.get(rowData2, 0, uvlength);

            // fill data for the u/v channels
            for (int col = 0; col < width / 2; ++col) {
                // u channel
                data[sizeY + (row * width) / 2 + col] = rowData1[col * planes[1].getPixelStride()];
                // v channel
                data[sizeY + sizeUV + (row * width) / 2 + col] = rowData2[col * planes[2].getPixelStride()];
            }
        }
    }

    return data;
}
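A typical call site (illustrative) is inside an ImageReader callback; remember to close the Image, or the camera pipeline will stall:

Image image = reader.acquireLatestImage();
if (image != null) {
    byte[] yuv = imageToMat(image); // planar layout: all Y, then U, then V
    image.close();                  // release the buffer back to the camera
    // ... feed yuv to the image processing library
}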
I have a .bin file that was created by MATLAB code as uint16, and I need to read it in Java.
With the code below, I get a blurry image with a very bad grayscale, and the length of the file seems to be double the number of pixels. There seems to be some loss of information when reading the file this way. Is there a way to read .bin files other than input streams?
This is how I try to read the .bin file:
is = new FileInputStream(filename);
dis = new DataInputStream(is);

int[] buf = new int[length];
int[][] real = new int[x][y];

while (dis.available() > 0) {
    buf[i] = dis.readShort();
}

int counter = 0;
for (int j = 0; j < x; j++) {
    for (int k = 0; k < y; k++) {
        real[j][k] = buf[counter];
        counter++;
    }
}
return real;
And this is the part of the main class where the first class is called:
BinaryFile2 binary = new BinaryFile2();
int[][] image = binary.read("data001.bin", 1024, 2048);

BufferedImage theImage = new BufferedImage(1024, 2048, BufferedImage.TYPE_BYTE_GRAY);
for (int y = 0; y < 2048; y++) {
    for (int x = 0; x < 1024; x++) {
        int value = image[x][y];
        theImage.setRGB(x, y, value);
    }
}

File outputfile = new File("saved.png");
ImageIO.write(theImage, "png", outputfile);
You are storing uint16 data in an int array; this may lead to loss/corruption of data.
The following post discusses a similar issue:
Java read unsigned int, store, and write it back
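The short version (a sketch): mask off the sign extension, or use readUnsignedShort(), which DataInputStream provides for exactly this case:

int value = dis.readUnsignedShort();   // reads 2 bytes big-endian, yields 0..65535
// or, equivalently:
int masked = dis.readShort() & 0xffff; // undo the sign extension manually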
To correctly read and display an image originally stored as uint16, it's best to use the BufferedImage.TYPE_USHORT_GRAY type. A Java short is 16 bits, and DataBufferUShort is made for storing unsigned 16-bit values.
Try this:
InputStream is = ...;
DataInputStream data = new DataInputStream(is);

BufferedImage theImage = new BufferedImage(1024, 2048, BufferedImage.TYPE_USHORT_GRAY);
short[] pixels = ((DataBufferUShort) theImage.getRaster().getDataBuffer()).getData();

for (int i = 0; i < pixels.length; i++) {
    pixels[i] = data.readShort(); // short value is signed, but DataBufferUShort will take care of the "unsigning"
}
// TODO: close() streams in a finally block
To convert the image further to an 8-bit image, you can create a new image and draw the original onto it:
BufferedImage otherImage = new BufferedImage(1024, 2048, BufferedImage.TYPE_BYTE_GRAY);

Graphics2D g = otherImage.createGraphics();
try {
    g.drawImage(theImage, 0, 0, null);
}
finally {
    g.dispose();
}
Now you can store otherImage as an 8-bit grayscale PNG.
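For example (the file name is made up):

ImageIO.write(otherImage, "png", new File("saved-8bit.png"));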
I have the following code to read a black-and-white picture in Java.
Image image = ImageIO.read(new File(path));

BufferedImage img = new BufferedImage(image.getWidth(null), image.getHeight(null), BufferedImage.TYPE_USHORT_GRAY);
Graphics g = img.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();

int w = img.getWidth();
int h = img.getHeight();
int[][] array = new int[w][h];

for (int j = 0; j < w; j++) {
    for (int k = 0; k < h; k++) {
        array[j][k] = img.getRGB(j, k);
        System.out.print(array[j][k]);
    }
}
As you can see, I have set the type of the BufferedImage to TYPE_USHORT_GRAY, and I expected to see numbers between 0 and 255 in the 2D array matrix. Instead I see -1 and other large integers. Can anyone highlight my mistake, please?
As already mentioned in the comments and answers, the mistake is using the getRGB() method, which converts your pixel values to the packed int format in the default sRGB color space (TYPE_INT_ARGB). In this format, -1 is the same as 0xffffffff, which means pure white.
To access your unsigned short pixel data directly, try:
int w = img.getWidth();
int h = img.getHeight();

DataBufferUShort buffer = (DataBufferUShort) img.getRaster().getDataBuffer(); // Safe cast as img is of type TYPE_USHORT_GRAY

// Conveniently, the buffer already contains the data array
short[] arrayUShort = buffer.getData();

// Access it like:
int grayPixel = arrayUShort[x + y * w] & 0xffff;

// ...or alternatively, if you like to re-arrange the data to a 2-dimensional array:
int[][] array = new int[w][h];

// Note: I switched the loop order to access pixels in a more natural order
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        array[x][y] = buffer.getElem(x + y * w);
        System.out.print(array[x][y]);
    }
}

// Access it like:
grayPixel = array[x][y];
PS: It's probably still a good idea to look at the second link provided by @blackSmith, for proper color to gray conversion. ;-)
A BufferedImage of type TYPE_USHORT_GRAY, as its name says, stores pixels using 16 bits (the size of short is 16 bits). The range 0..255 is only 8 bits, so the values may be well beyond 255.
And BufferedImage.getRGB() does not return these 16 bits of pixel data; quoting from its javadoc:
Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB) and default sRGB colorspace.
getRGB() will always return the pixel in RGB format, regardless of the type of the BufferedImage.
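So if an 8-bit gray level is all you need, you can extract a single channel from the packed value getRGB() returns; for a grayscale image all three channels are equal. A sketch:

int argb = img.getRGB(x, y);     // packed as 0xAARRGGBB
int gray8 = (argb >> 16) & 0xff; // red channel; equals green and blue for gray pixels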