JavaFX Image to byte[] array (Closed)

I need a fast way to convert a JavaFX Image to a byte array.
The approach using "BufferedImage bImage = SwingFXUtils.fromFXImage(i, null);" is too slow.
I think it's better not to convert the Image to an awt.BufferedImage first.
So what I have so far is:
PixelReader pr = img.getPixelReader();
WritablePixelFormat<ByteBuffer> pixelformat = WritablePixelFormat.getByteBgraInstance();
int w = (int) img.getWidth();
int h = (int) img.getHeight();
int offset = 0;
int scanlineStride = w * 4;
byte[] buffer = new byte[w * h * 4];
pr.getPixels(0, 0, w, h, pixelformat, buffer, offset, scanlineStride);
But this is not working as expected.
It seems like the byte[] is empty?
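One possible cause, assuming the Image was loaded from a URL: with background loading enabled (new Image(url, true)), getPixelReader() can return null and the pixel data is not available until loading completes. A quick sanity check:

// Pixels are only readable once the image has finished loading
if (img.isError()) {
    throw new IllegalStateException("Image failed to load", img.getException());
}
if (img.getProgress() < 1.0) {
    System.out.println("Image still loading: " + (img.getProgress() * 100) + "%");
}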

You can use the toByteArray() method of the IOUtils class from the org.apache.commons.io package (commons-io).
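Note that IOUtils.toByteArray reads from an InputStream, so it only helps if you still have access to the encoded image source (file, URL, resource) rather than the decoded Image. A minimal sketch, assuming commons-io is on the classpath (MyClass and the resource path are illustrative):

import org.apache.commons.io.IOUtils;
import java.io.InputStream;

// Yields the encoded bytes (PNG/JPEG/...), not raw BGRA pixel data
try (InputStream in = MyClass.class.getResourceAsStream("/images/picture.png")) {
    byte[] encoded = IOUtils.toByteArray(in);
}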

Related

How do I convert byte data into an image format that's usable within Java?

I'm currently making a Java application to take an image from a fingerprint scanner (ZK4500 model) and display it on the screen. The SDK code is pretty straightforward and I have everything working from the user's perspective. However, the only method the SDK has for drawing the image to the screen is to take the buffered image data and write it to a file. The file is then read into an icon data type and displayed in a JLabel.
The problem I'd like to solve is that the image buffer data is constantly written to the hard drive and then read back from the hard drive just to see what the fingerprint image looks like. I'd like to translate the image buffer data already in memory so it can be drawn to the screen... preferably in a JLabel object, but it can be a different object if need be.
The following prepares the image data to be read from the scanner and then displayed in a JLabel...
private long device = 0;
private byte[] imageData = null; // image buffer data
private int imageWidth = 0;
private int imageHeight = 0;
private byte[] parameter = new byte[4];
private int[] size = new int[1];
device = FingerprintSensorEx.OpenDevice(0);
FingerprintSensorEx.GetParameters(device, 1, parameter, size);
imageWidth = byteArrayToInt(parameter); // (!) see next code snippet below
FingerprintSensorEx.GetParameters(device, 2, parameter, size);
imageHeight = byteArrayToInt(parameter); // (!) see next code snippet below
imageData = new byte[imageWidth * imageHeight]; // data size (284 x 369)
FingerprintSensorEx.AcquireFingerprintImage(device, imageData); // loads image buffer data
writeImageFile(imageData, imageWidth, imageHeight); // (!) see next code snippet below
imageDisplay.setIcon(new ImageIcon(ImageIO.read(new File("fingerprint.bmp")))); // jlabel object
The following is how the SDK writes the image data to a file...
private void writeImageFile(byte[] imageBuf, int nWidth, int nHeight) throws IOException {
    java.io.FileOutputStream fos = new java.io.FileOutputStream("fingerprint.bmp");
    java.io.DataOutputStream dos = new java.io.DataOutputStream(fos);
    // BMP rows are padded to a multiple of 4 bytes
    int w = (((nWidth + 3) / 4) * 4);
    // BITMAPFILEHEADER
    int bfType = 0x424d; // "BM"
    int bfSize = 54 + 1024 + w * nHeight; // headers + 256-entry palette + pixel data
    int bfReserved1 = 0;
    int bfReserved2 = 0;
    int bfOffBits = 54 + 1024;
    dos.writeShort(bfType);
    dos.write(changeByte(bfSize), 0, 4);
    dos.write(changeByte(bfReserved1), 0, 2);
    dos.write(changeByte(bfReserved2), 0, 2);
    dos.write(changeByte(bfOffBits), 0, 4);
    // BITMAPINFOHEADER: 8 bits per pixel, uncompressed
    int biSize = 40;
    int biPlanes = 1;
    int biBitcount = 8;
    int biCompression = 0;
    int biSizeImage = w * nHeight;
    int biXPelsPerMeter = 0;
    int biYPelsPerMeter = 0;
    int biClrUsed = 0;
    int biClrImportant = 0;
    dos.write(changeByte(biSize), 0, 4);
    dos.write(changeByte(nWidth), 0, 4);
    dos.write(changeByte(nHeight), 0, 4);
    dos.write(changeByte(biPlanes), 0, 2);
    dos.write(changeByte(biBitcount), 0, 2);
    dos.write(changeByte(biCompression), 0, 4);
    dos.write(changeByte(biSizeImage), 0, 4);
    dos.write(changeByte(biXPelsPerMeter), 0, 4);
    dos.write(changeByte(biYPelsPerMeter), 0, 4);
    dos.write(changeByte(biClrUsed), 0, 4);
    dos.write(changeByte(biClrImportant), 0, 4);
    // Grayscale palette: 256 entries of (B, G, R, reserved)
    for (int i = 0; i < 256; i++) {
        dos.writeByte(i);
        dos.writeByte(i);
        dos.writeByte(i);
        dos.writeByte(0);
    }
    byte[] filter = null;
    if (w > nWidth) {
        filter = new byte[w - nWidth];
    }
    // BMP stores rows bottom-up, hence (nHeight - 1 - i)
    for (int i = 0; i < nHeight; i++) {
        dos.write(imageBuf, (nHeight - 1 - i) * nWidth, nWidth);
        if (w > nWidth)
            dos.write(filter, 0, w - nWidth);
    }
    dos.flush();
    dos.close();
    fos.close();
}
private int byteArrayToInt(byte[] bytes) {
    // Little-endian byte[4] -> int
    int number = bytes[0] & 0xFF;
    number |= ((bytes[1] << 8) & 0xFF00);
    number |= ((bytes[2] << 16) & 0xFF0000);
    number |= ((bytes[3] << 24) & 0xFF000000);
    return number;
}

private byte[] intToByteArray(final int number) {
    // int -> little-endian byte[4]
    byte[] abyte = new byte[4];
    abyte[0] = (byte) (0xff & number);
    abyte[1] = (byte) ((0xff00 & number) >> 8);
    abyte[2] = (byte) ((0xff0000 & number) >> 16);
    abyte[3] = (byte) ((0xff000000 & number) >> 24);
    return abyte;
}

private byte[] changeByte(int data) {
    return intToByteArray(data);
}
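As an aside, these helpers implement little-endian conversion by hand; java.nio.ByteBuffer can do the same (a sketch, not part of the SDK code, reusing the parameter array from above):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Equivalent little-endian conversions
int number = ByteBuffer.wrap(parameter).order(ByteOrder.LITTLE_ENDIAN).getInt();
byte[] abyte = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(number).array();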
I included how the image data is written to the file output stream in case there is some clue as to what the real format of the scanner's image buffer data is. GIMP tells me that the written file is an 8-bit grayscale gamma integer BMP.
I know practically nothing about Java, so I hope someone can point me in the right direction from a beginner's perspective. I read that a BufferedImage is the best way to work with images in Java, but I just couldn't connect the dots with the byte data from the scanner. I tried things along the lines of...
BufferedImage img = ImageIO.read(new ByteArrayInputStream(imageData));
imageDisplay.setIcon(new ImageIcon(img)); // jlabel object
...but it returned an error because the image was "null". I think the image data needs to be in an array format first? Maybe the code showing how the SDK writes the BMP file helps solve that, but I'm just grasping at straws here.
The writeImageFile does seem correct to me, and writes a valid BMP file that ImageIO should handle fine. However, writing the data to disk, just to read it back in, is a waste of time (and disk storage)... Instead, I would just create a BufferedImage directly from the image data.
I don't have your SDK or device, so I'm assuming the image dimensions and arrays are correctly filled (I'm just filling it with a gradient in the example):
// Dimensions from your sample code
int imageWidth = 284;
int imageHeight = 369;
byte[] imageData = new byte[imageWidth * imageHeight];
simulateCapture(imageData, imageWidth, imageHeight);
// The important parts:
// 1: Creating a new image to hold 8 bit gray data
BufferedImage image = new BufferedImage(imageWidth, imageHeight, BufferedImage.TYPE_BYTE_GRAY);
// 2: Setting the image data from your array to the image
image.getRaster().setDataElements(0, 0, imageWidth, imageHeight, imageData);
// And just to prove that it works
System.out.println("image = " + image);
JOptionPane.showMessageDialog(null, new ImageIcon(image), "image", JOptionPane.INFORMATION_MESSAGE);
public void simulateCapture(byte[] imageData, int imageWidth, int imageHeight) {
    // Filling the image data with a gradient from black upper-left to white lower-right
    for (int y = 0; y < imageHeight; y++) {
        for (int x = 0; x < imageWidth; x++) {
            imageData[imageWidth * y + x] = (byte) (255 * y * x / (imageHeight * imageWidth));
        }
    }
}
Output:
image = BufferedImage@4923ab24: type = 10 ColorModel: #pixelBits = 8 numComponents = 1 color space = java.awt.color.ICC_ColorSpace@44c8afef transparency = 1 has alpha = false isAlphaPre = false ByteInterleavedRaster: width = 284 height = 369 #numDataElements 1 dataOff[0] = 0
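And to plug it into the question's UI instead of the dialog (assuming imageDisplay is the JLabel from the question's code):

// No file round-trip: set the in-memory image directly on the existing JLabel
imageDisplay.setIcon(new ImageIcon(image));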

Efficiently extracting RGBA buffer from BufferedImage

I've been trying to load BufferedImages in Java as IntBuffers. However, one problem I've come across is getting the pixel data from an image with partial or complete transparency. Java only seems to allow you to get the RGB value, which in my case is a problem because any pixels that should be transparent are rendered completely opaque. After a few hours of searching I came across this way of getting the RGBA values...
Color color = new Color(image.getRGB(x, y), true);
Although it does work, it can't possibly be the best way of doing this. Does anyone know of a more efficient way to complete the same task, one that does not require an instance of a Color object for EVERY pixel? You can see how this would be bad if you're trying to load a fairly large image. Here is my code, just in case you need a reference...
public static IntBuffer getImageBuffer(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int[] pixels = new int[width * height];
    for (int i = 0; i < pixels.length; i++) {
        Color color = new Color(image.getRGB(i % width, i / width), true);
        int a = color.getAlpha();
        int r = color.getRed();
        int g = color.getGreen();
        int b = color.getBlue();
        pixels[i] = a << 24 | b << 16 | g << 8 | r;
    }
    return BufferUtils.toIntBuffer(pixels);
}

public static IntBuffer toIntBuffer(int[] elements) {
    IntBuffer buffer = ByteBuffer.allocateDirect(elements.length << 2).order(ByteOrder.nativeOrder()).asIntBuffer();
    buffer.put(elements).flip();
    return buffer;
}
Edit: the BufferedImage passed in as the parameter is loaded from disk.
Here's some old code I have that converts images to OpenGL format for LWJGL. Since the byte order has to be swapped, it isn't useful (I think) to load the image as, for example, integers.
public static ByteBuffer decodePng(BufferedImage image) throws IOException {
    int width = image.getWidth();
    int height = image.getHeight();
    // Load texture contents into a byte buffer
    ByteBuffer buf = ByteBuffer.allocateDirect(4 * width * height);
    // Decode image: ARGB format -> RGBA
    for (int h = 0; h < height; h++) {
        for (int w = 0; w < width; w++) {
            int argb = image.getRGB(w, h);
            buf.put((byte) (0xFF & (argb >> 16))); // R
            buf.put((byte) (0xFF & (argb >> 8)));  // G
            buf.put((byte) (0xFF & (argb)));       // B
            buf.put((byte) (0xFF & (argb >> 24))); // A
        }
    }
    buf.flip();
    return buf;
}
Example usage:
BufferedImage image = ImageIO.read( getClass().getResourceAsStream(heightMapFile) );
int height = image.getHeight();
int width = image.getWidth();
ByteBuffer buf = TextureUtils.decodePng(image);
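As a side note, the per-pixel Color objects from the question can be avoided entirely with BufferedImage's bulk getRGB variant. A minimal sketch of the same ARGB-to-RGBA repacking (variable names follow the question's code):

// One bulk call instead of width * height getRGB(x, y) calls
int[] argb = image.getRGB(0, 0, width, height, null, 0, width);
ByteBuffer buf = ByteBuffer.allocateDirect(4 * argb.length).order(ByteOrder.nativeOrder());
for (int pixel : argb) {
    buf.put((byte) ((pixel >> 16) & 0xFF)); // R
    buf.put((byte) ((pixel >> 8) & 0xFF));  // G
    buf.put((byte) (pixel & 0xFF));         // B
    buf.put((byte) ((pixel >> 24) & 0xFF)); // A
}
buf.flip();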
If you're interested, I did a JVM port of gli that deals with this stuff so that you don't have to worry about it.
An example of texture loading:
public static int createTexture(String filename) {
    Texture texture = gli.load(filename);
    if (texture.empty())
        return 0;
    gli_.gli.gl.setProfile(gl.Profile.GL33);
    gl.Format format = gli_.gli.gl.translate(texture.getFormat(), texture.getSwizzles());
    gl.Target target = gli_.gli.gl.translate(texture.getTarget());
    assert (texture.getFormat().isCompressed() && target == gl.Target._2D);
    IntBuffer textureName = intBufferBig(1);
    glGenTextures(textureName);
    glBindTexture(target.getI(), textureName.get(0));
    glTexParameteri(target.getI(), GL12.GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(target.getI(), GL12.GL_TEXTURE_MAX_LEVEL, texture.levels() - 1);
    IntBuffer swizzles = intBufferBig(4);
    texture.getSwizzles().to(swizzles);
    glTexParameteriv(target.getI(), GL33.GL_TEXTURE_SWIZZLE_RGBA, swizzles);
    Vec3i extent = texture.extent(0);
    glTexStorage2D(target.getI(), texture.levels(), format.getInternal().getI(), extent.x, extent.y);
    for (int level = 0; level < texture.levels(); level++) {
        extent = texture.extent(level);
        glCompressedTexSubImage2D(
                target.getI(), level, 0, 0, extent.x, extent.y,
                format.getInternal().getI(), texture.data(0, 0, level));
    }
    return textureName.get(0);
}

Pixel data of a 16-bit DICOM image to BufferedImage

I've got a byte array storing 16-bit pixel data from an already-deconstructed DICOM file. What I need to do now is convert/export that pixel data somehow into a TIFF file format. I'm using the imageio-tiff-3.3.2.jar plugin to handle the tiff conversion/header data. But now I need to pack that image data array into a BufferedImage of the original image dimensions so it can be exported to TIFF. But it seems that BufferedImage doesn't support 16-bit images. Is there a way around this problem, such as an external library? Is there another way I can pack that image data into a TIFF image of the original DICOM dimensions? Keep in mind, this process has to be completely lossless. I've looked around and tried out some things for the last few days, but so far nothing has worked for me.
Let me know if you have any questions or if there's anything I can do to clear up any confusion.
EDIT: intended and current image (screenshots omitted)
Given your input data of a raw byte array containing unsigned 16-bit image data, here are two ways to create a BufferedImage.
The first one will be slower, as it involves copying the byte array into a short array. It will also need twice the amount of memory. The upside is that it creates a standard TYPE_USHORT_GRAY BufferedImage, which may be faster to display and may be more compatible.
private static BufferedImage createCopyUsingByteBuffer(int w, int h, byte[] rawBytes) {
    short[] rawShorts = new short[rawBytes.length / 2];
    ByteBuffer.wrap(rawBytes)
            // .order(ByteOrder.LITTLE_ENDIAN) // Depending on the data's endianness
            .asShortBuffer()
            .get(rawShorts);
    DataBuffer dataBuffer = new DataBufferUShort(rawShorts, rawShorts.length);
    int stride = 1;
    WritableRaster raster = Raster.createInterleavedRaster(dataBuffer, w, h, w * stride, stride, new int[] {0}, null);
    ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
    return new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
}
A variant that is much faster to create (the previous version takes 4-5x more time), but results in a TYPE_CUSTOM image that might be slower to display (it does seem to perform reasonably, though, in my tests). It uses very little extra memory, as it does no copying/conversion of the input data at creation time.
Instead, it uses a custom sample model, that has DataBuffer.TYPE_USHORT as transfer type, but uses DataBufferByte as data buffer.
private static BufferedImage createNoCopy(int w, int h, byte[] rawBytes) {
    DataBuffer dataBuffer = new DataBufferByte(rawBytes, rawBytes.length);
    int stride = 2;
    SampleModel sampleModel = new MyComponentSampleModel(w, h, stride);
    WritableRaster raster = Raster.createWritableRaster(sampleModel, dataBuffer, null);
    ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
    return new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
}

private static class MyComponentSampleModel extends ComponentSampleModel {
    public MyComponentSampleModel(int w, int h, int stride) {
        super(DataBuffer.TYPE_USHORT, w, h, stride, w * stride, new int[] {0});
    }

    @Override
    public Object getDataElements(int x, int y, Object obj, DataBuffer data) {
        if ((x < 0) || (y < 0) || (x >= width) || (y >= height)) {
            throw new ArrayIndexOutOfBoundsException("Coordinate out of bounds!");
        }
        // Simplified, as we only support TYPE_USHORT
        int numDataElems = getNumDataElements();
        int pixelOffset = y * scanlineStride + x * pixelStride;
        short[] sdata;
        if (obj == null) {
            sdata = new short[numDataElems];
        }
        else {
            sdata = (short[]) obj;
        }
        for (int i = 0; i < numDataElems; i++) {
            sdata[i] = (short) (data.getElem(0, pixelOffset) << 8 | data.getElem(0, pixelOffset + 1));
            // If little endian, swap the element order, like this:
            // sdata[i] = (short) (data.getElem(0, pixelOffset + 1) << 8 | data.getElem(0, pixelOffset));
        }
        return sdata;
    }
}
If your image looks strange after this conversion, try flipping the endianness, as commented in the code.
And finally, some code to exercise the above:
public static void main(String[] args) {
    int w = 1760;
    int h = 2140;
    byte[] rawBytes = new byte[w * h * 2]; // This will be your input array, 7532800 bytes
    ShortBuffer buffer = ByteBuffer.wrap(rawBytes)
            // .order(ByteOrder.LITTLE_ENDIAN) // Try swapping the byte order to see sharp edges
            .asShortBuffer();
    // Let's make a simple gradient, from black UL to white BR
    int max = 65535; // Unsigned short max value
    for (int y = 0; y < h; y++) {
        double v = max * y / (double) h;
        for (int x = 0; x < w; x++) {
            buffer.put((short) Math.round((v + max * x / (double) w) / 2.0));
        }
    }
    final BufferedImage image = createNoCopy(w, h, rawBytes);
    // final BufferedImage image = createCopyUsingByteBuffer(w, h, rawBytes);
    SwingUtilities.invokeLater(new Runnable() {
        @Override
        public void run() {
            JFrame frame = new JFrame("Test");
            frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
            frame.add(new JScrollPane(new JLabel(new ImageIcon(image))));
            frame.pack();
            frame.setLocationRelativeTo(null);
            frame.setVisible(true);
        }
    });
}
The output should look like a smooth gradient from black in the upper left to white in the lower right (screenshot, scaled down to 1/10th, omitted).
The easiest thing to do is to create a BufferedImage of type TYPE_USHORT_GRAY, which is the type to use for 16-bit grayscale encoding.
public BufferedImage Convert(short[] array, final int width, final int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_USHORT_GRAY);
    short[] sb = ((DataBufferUShort) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(array, 0, sb, 0, array.length);
    return image;
}
Then you can use javax.imageio to save your image as a TIFF or a PNG. I think the TwelveMonkeys project provides better TIFF support for ImageIO, but you have to check first.
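For the saving step, a minimal sketch (TIFF writing is built into ImageIO since Java 9; on older JVMs a plugin such as TwelveMonkeys' imageio-tiff provides it):

import javax.imageio.ImageIO;
import java.io.File;
import java.io.IOException;

// Lossless export; write() returns false if no TIFF writer is registered
boolean written = ImageIO.write(image, "TIFF", new File("output.tif"));
if (!written) {
    throw new IOException("No TIFF writer available");
}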
[EDIT] In your case, because you deal with huge DICOM images that cannot be stored in a regular BufferedImage, you have to create your own type, using the Unsafe class to allocate the DataBuffer:
Create a new class DataBufferLongShort that will allocate the needed array/DataBuffer using the Unsafe class, so that you can use long indexes instead of int.
Create a new class DataBuffer that extends the classical DataBuffer in order to add a type TYPE_LONG_USHORT.
Then you can create the ColorModel with the new DataBuffer.

Convert YUV_420_888 to byte array

I am testing out the new Camera2 API, and I'm able to capture the camera preview in YUV_420_888 format. What I need to do next is to feed this data to an image processing library, which accepts a byte[] parameter.
I've found examples of converting YUV_420_888 to RGB and such, but I still need to convert the resulting Bitmap to byte[] through a ByteArrayOutputStream, which, after experimenting, slows down the app tremendously.
My question is, how do I convert YUV_420_888 to byte[] efficiently?
What is the actual format of the byte[] array the image processing library wants? Is it RGB? YUV planar? YUV semiplanar?
Assuming it's RGB, given that you reference converting YUV_420_888 to RGB, you can just modify that example to not create a Bitmap from the allocation - just use Allocation.copyTo with byte[] instead of Bitmap.
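For the final copy-out step, the RenderScript Allocation holding the converted RGBA frame can be copied straight into a byte array, skipping the Bitmap and ByteArrayOutputStream entirely. A sketch, assuming rgbaAllocation is the output Allocation from the conversion example you referenced:

// Copy the converted RGBA data directly out of the Allocation
byte[] rgba = new byte[width * height * 4];
rgbaAllocation.copyTo(rgba);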
I spent a lot of time looking for a solution and finally found one, based on another answer here on Stack Overflow. I want to share my customized code, which has been optimized to reduce the number of loop iterations. It works for me with YUV_420_888 Images from the Camera2 API :D
public static byte[] imageToMat(Image image) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer buffer0 = planes[0].getBuffer();
    ByteBuffer buffer1 = planes[1].getBuffer();
    ByteBuffer buffer2 = planes[2].getBuffer();
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] data = new byte[width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8];
    byte[] rowData1 = new byte[planes[1].getRowStride()];
    byte[] rowData2 = new byte[planes[2].getRowStride()];
    int bytesPerPixel = ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
    int offsetY = 0;
    int sizeY = width * height * bytesPerPixel;
    int sizeUV = (width * height * bytesPerPixel) / 4;
    for (int row = 0; row < height; row++) {
        // Fill data for the Y channel, one row per iteration
        {
            int length = bytesPerPixel * width;
            buffer0.get(data, offsetY, length);
            if (height - row != 1)
                buffer0.position(buffer0.position() + planes[0].getRowStride() - length);
            offsetY += length;
        }
        // The U/V planes have half the rows; skip past the halfway point
        if (row >= height / 2)
            continue;
        {
            int uvlength = planes[1].getRowStride();
            if ((height / 2 - row) == 1) {
                // The last U/V row may be shorter than the row stride
                uvlength = width / 2 - planes[1].getPixelStride() + 1;
            }
            buffer1.get(rowData1, 0, uvlength);
            buffer2.get(rowData2, 0, uvlength);
            // Fill data for the U and V channels, honoring the pixel stride
            for (int col = 0; col < width / 2; ++col) {
                // U channel
                data[sizeY + (row * width) / 2 + col] = rowData1[col * planes[1].getPixelStride()];
                // V channel
                data[sizeY + sizeUV + (row * width) / 2 + col] = rowData2[col * planes[2].getPixelStride()];
            }
        }
    }
    return data;
}
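Example usage, assuming an android.media.ImageReader configured for ImageFormat.YUV_420_888 (processFrame stands in for your image processing library call):

// Inside an ImageReader.OnImageAvailableListener
Image image = reader.acquireLatestImage();
if (image != null) {
    try {
        byte[] yuvBytes = imageToMat(image); // planar Y, then U, then V
        processFrame(yuvBytes);
    } finally {
        image.close(); // required, or the camera pipeline stalls
    }
}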

How do I use PixelReader's getPixels() method?

How do I convert a javafx.scene.image.Image to a byte array in the format bgra?
I tried doing:
PixelReader pixelReader = img.getPixelReader();
int width = (int)img.getWidth();
int height = (int)img.getHeight();
byte[] buffer = new byte[width * height * 4];
pixelReader.getPixels(
0,
0,
width,
height,
PixelFormat.getByteBgraInstance(),
buffer,
0,
width
);
but it didn't work; my byte[] buffer is still filled with zeros.
The scanlineStride, i.e. the number of bytes per row, must be the width multiplied by 4, since each BGRA pixel occupies 4 bytes:
PixelReader pixelReader = img.getPixelReader();
int width = (int)img.getWidth();
int height = (int)img.getHeight();
byte[] buffer = new byte[width * height * 4];
pixelReader.getPixels(
0,
0,
width,
height,
PixelFormat.getByteBgraInstance(),
buffer,
0,
width * 4
);
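With the BGRA format each pixel occupies 4 bytes in blue, green, red, alpha order, which is where the factor of 4 comes from. For example, to read back the top-left pixel from the filled buffer:

// buffer[0..3] hold the top-left pixel in B, G, R, A order
int blue  = buffer[0] & 0xFF;
int green = buffer[1] & 0xFF;
int red   = buffer[2] & 0xFF;
int alpha = buffer[3] & 0xFF;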
