JavaFX: Fastest way to write pixels to PixelWriter

I'm looking for the fastest way to write pixels to a javafx.scene.image.Image. Writing to a BufferedImage's backing array is much faster: on the test image I made it took only ~20 ms, while the WritableImage approach took ~100 ms. I already tried SwingFXUtils, but no luck.
Code for BufferedImage (faster):
// createCompatibleImage() is a helper (presumably returning an int-backed image such as TYPE_INT_ARGB, so the DataBufferInt cast works)
BufferedImage bi = createCompatibleImage( width, height );
WritableRaster raster = bi.getRaster();
DataBufferInt dataBuffer = (DataBufferInt) raster.getDataBuffer();
// Bulk-copy the packed int pixels straight into the image's backing array
System.arraycopy( pixels, 0, dataBuffer.getData(), 0, pixels.length );
Code for WritableImage (slower):
WritableImage wi = new WritableImage( width, height );
PixelWriter pw = wi.getPixelWriter();
WritablePixelFormat<IntBuffer> pf = WritablePixelFormat.getIntArgbInstance();
pw.setPixels( 0, 0, width, height, pf, pixels, 0, width );
Maybe there's a way to write to WritableImage's backing array too?

For the performance of the pixel writer it is absolutely crucial that you pick the right pixel format. You can check what the native pixel format is via
pw.getPixelFormat().getType()
On my Mac this is PixelFormat.Type.BYTE_BGRA_PRE. If your raw data conforms to this pixel format, then the transfer to the image should be pretty fast. Otherwise the pixel data has to be converted and that takes some time.
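As a minimal sketch (assuming the native type really is BYTE_BGRA_PRE, and using a hypothetical byte array bgraPixels that already holds the data as premultiplied B,G,R,A bytes), a matching write would look like this:
PixelWriter pw = wi.getPixelWriter();
if (pw.getPixelFormat().getType() == PixelFormat.Type.BYTE_BGRA_PRE) {
    // No conversion needed: the bytes are copied as-is
    pw.setPixels(0, 0, width, height,
            PixelFormat.getByteBgraPreInstance(),
            bgraPixels, 0, width * 4); // scanline stride is in bytes here
} else {
    // Fall back to the slower INT_ARGB path from the question
    pw.setPixels(0, 0, width, height,
            WritablePixelFormat.getIntArgbInstance(),
            pixels, 0, width);
}
If your source data is int ARGB, converting it once to a BGRA_PRE byte array yourself can still be cheaper than letting the PixelWriter convert on every call.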

Related

How can I set pixels in Java BufferedImages using non-ARGB color spaces?

I'm writing an application that needs to work with 16-bit "5-5-5" RGB colors (that is, 5 bits for each color and one bit of padding). In order to handle these images, I am using the BufferedImage class provided by AWT. The BufferedImage class specifically allows for the usage of non-RGB color spaces by taking either a ColorModel object or a predefined image type constant - one of which is the 5-5-5 pixel format that I need.
My problem is this: the BufferedImage "setRGB()" method states in its description that color values provided are "assumed to be in the default RGB color model, TYPE_INT_ARGB, and default sRGB color space" (per the BufferedImage documentation page). No other method seems to accept values designed for different color spaces, either.
Is there a way to use my non-standard color space directly with BufferedImage, or would I have to rely on the class's internal color conversion mechanisms to handle all of my colors? (Or am I just misreading/misunderstanding something about how the class works?)
BufferedImage.TYPE_USHORT_555_RGB still uses a completely standard RGB color space (in fact, it uses sRGB), so I don't think a different color space is what you are looking for.
If you want to perform painting or other operations in Java, just use the normal methods like setRGB()/getRGB() and createGraphics()/Graphics2D. Everything will be properly converted to and from the packed USHORT_555_RGB format for you.
For example:
BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_USHORT_555_RGB);
// Do some custom painting
Graphics2D g = image.createGraphics();
g.drawImage(otherImage, 0, 0, null); // image type here does not matter
g.setColor(Color.ORANGE); // Color in sRGB, but does not matter
g.fillOval(0, 0, w, h);
g.dispose();
image.setRGB(0, h/2, w, 1, new int[w]); // Silly way to create a horizontal black line at the center of the image... Don't do this, use fillRect(0, h/2, w, 1)! ;-)
// image will still be USHORT_555_RGB *internally*
However, if you already have pixel data in the USHORT_555_RGB format (i.e. from an external library/API/service), it may be faster and more accurate to set these values directly on the raster/data buffer. The same applies if you need to pass the pixel values back to that library/API/service.
For example, using the Raster:
BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_USHORT_555_RGB);
// Some fictional API. It's assumed that apiPixels.length == w * h
short[] apiPixels = api.getPixelsUSHORT_555_RGB(w, h);
WritableRaster raster = image.getRaster();
// Set short values to image
raster.setDataElements(0, 0, w, h, apiPixels);
// Get short values from image
short[] pixels = (short[]) raster.getDataElements(0, 0, w, h, null); // TYPE_USHORT_555_RGB -> always short[]
api.setPixels(pixels, w, h); // Another fictional API
Or, alternatively, use the DataBuffer:
BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_USHORT_555_RGB);
// Some fictional API. It's assumed that apiPixels.length == w * h
short[] apiPixels = api.getPixelsUSHORT_555_RGB(w, h);
DataBufferUShort buffer = (DataBufferUShort) image.getRaster().getDataBuffer(); // TYPE_USHORT_555_RGB -> always DataBufferUShort
// Set short values to image
System.arraycopy(apiPixels, 0, buffer.getData(), 0, apiPixels.length);
// Get short values from image
api.setPixels(buffer.getData(), w, h);
In most cases it does not matter which method you use, but the first approach (using Raster only) may keep the image managed, which will make images display faster on screen from your Java process.
PS: If a different color space is really what you need (ie. the pixel array from the external library/api/service uses a different color space, and you need to view the pixels in this color space), you can create a BufferedImage in USHORT_555_RGB style with a custom color space like this:
// Either use one of the built-in color spaces, or load one from disk
ColorSpace colorSpace = ColorSpace.getInstance(ColorSpace.CS_LINEAR_RGB);
ColorSpace colorSpaceToo = new ICC_ColorSpace(ICC_Profile.getInstance(Files.newInputStream(new File("/path/to/custom_rgb_profile.icc").toPath())));
// Create a color model using your color space, TYPE_USHORT and 5/5/5 mask, no transparency
ColorModel colorModel = new DirectColorModel(colorSpace, 15, 0x7C00, 0x03E0, 0x001F, 0, false, DataBuffer.TYPE_USHORT);
// And finally, create an image from the color model and a compatible raster
BufferedImage imageToo = new BufferedImage(colorModel, colorModel.createCompatibleWritableRaster(w, h), colorModel.isAlphaPremultiplied(), null);
Just remember that as the Java2D graphics operations and setRGB/getRGB are still using sRGB, now all operations on your image will be converted back and forth between your color space and sRGB. Performance will not be as good.

Is there a way to not stretch the Bitmap when using Matrix.polyToPoly() on a bitmap?

I'm trying to do perspective transformation on my bitmap with a given quadrilateral. However, the Matrix.polyToPoly function stretches not only the part of the image I want but also the pixels outside the given area, so that a huge image is formed in some edge cases which crashes my app because of OOM.
Is there any way to sort of drop the pixels outside of the said area to not be stretched?
Or are there any other possibilities to do a perspective transform which is more memory friendly?
I'm currently doing it like this:
// Perspective Transformation
float[] destination = {0, 0,
        width, 0,
        width, height,
        0, height};
Matrix post = new Matrix();
post.setPolyToPoly(boundingBox.toArray(), 0, destination, 0, 4);
Bitmap transformed = Bitmap.createBitmap(temp, 0, 0, cropWidth, cropHeight, post, true);
where cropWidth and cropHeight are the size of the bitmap (I cropped it to the edges of the quadrilateral to save memory) and temp is said cropped bitmap.
Thanks to pskink I got it working, here is a post with the code in it:
Distorting an image to a quadrangle fails in some cases on Android
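For reference, one memory-friendly pattern (a minimal sketch of the general idea, not the exact code from that post) is to allocate only a destination-sized bitmap and draw through a Canvas, so anything the matrix maps outside the output is simply clipped instead of growing the result:
// width/height are the desired output size, temp is the cropped source bitmap from the question
Bitmap transformed = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(transformed);
Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);
Matrix post = new Matrix();
post.setPolyToPoly(boundingBox.toArray(), 0, destination, 0, 4);
// Pixels mapped outside the width x height canvas are clipped, so no oversized bitmap is allocated
canvas.drawBitmap(temp, post, paint);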

Reduce the size of an image in java

I want to make sure that an image in my application is not more than 200x200 px and that its file size is not more than 150 kB. For example, if the file size of the image is more than 150 kB, I need to bring it down to 150 kB. The image can be of type JPEG, PNG, etc.
I have the following code for resizing an image to a given width and height
private BufferedImage resize(BufferedImage img, int newW, int newH) {
int w = img.getWidth();
int h = img.getHeight();
BufferedImage dimg = new BufferedImage(newW, newH, img.getType());
Graphics2D g = dimg.createGraphics();
g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);
g.drawImage(img, 0, 0, newW, newH, 0, 0, w, h, null);
g.dispose();
return dimg;
}
But I'm not sure how to go about reducing the file size to 150 kB. How do I do that in Java? An example would be really appreciated.
Thank you
Just as an option - ImageMagick - it also has some convenience wrappers for Java, so you can easily use it.
Does your question have any practical relevance or is it just theoretical?
A 200x200 pixel image with a colour depth of 24 bit requires 200 × 200 × 3 bytes = 120,000 bytes ≈ 117 kB uncompressed. If you use any reasonable JPEG encoder, it will also never exceed 150 kB for such an image.
Otherwise, you can only resize (or re-encode) the image repeatedly until it gets below the required file size.
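If you do want an explicit size cap, one possible approach (a sketch only; the quality steps and the helper name compressToLimit are my own, not from the question) is to re-encode the resized image as JPEG with decreasing compression quality until the output fits:
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

// Re-encodes img as JPEG, lowering the quality until the result is <= maxBytes.
// Note: the image should not have an alpha channel, since the JPEG writer cannot handle one.
private byte[] compressToLimit(BufferedImage img, long maxBytes) throws IOException {
    ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
    try {
        for (float quality = 0.9f; quality > 0.05f; quality -= 0.1f) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageWriteParam param = writer.getDefaultWriteParam();
            param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
            param.setCompressionQuality(quality);
            try (ImageOutputStream ios = ImageIO.createImageOutputStream(out)) {
                writer.setOutput(ios);
                writer.write(null, new IIOImage(img, null, null), param);
            }
            if (out.size() <= maxBytes) {
                return out.toByteArray();
            }
        }
    } finally {
        writer.dispose();
    }
    return null; // still too big: resize the image further and try again
}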

Java: understanding Image Rasters

To my understanding, the following code
int [] pixels = image.getRaster().getPixels(0, 0, width, height, (int[])null);
should generate an array of exactly width x height elements, but in practice it seems to be much larger. Why?
The image may be a BufferedImage, ToolkitImage or VolatileImage.
It generates an array of size
new int[numBands * w * h]
where numBands is the number of bands (samples per pixel) of the image data, taken from the SampleModel of your Raster.
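As a quick illustration (assuming the image is a BufferedImage; the getRGB() alternative only exists there):
int numBands = image.getRaster().getNumBands(); // e.g. 4 for ARGB, 3 for RGB
int[] samples = image.getRaster().getPixels(0, 0, width, height, new int[numBands * width * height]);
// If you want exactly one packed int per pixel instead, use:
int[] packed = image.getRGB(0, 0, width, height, null, 0, width);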

Turn an array of pixels into an Image object with Java's ImageIO?

I'm currently turning an array of pixel values (originally created with a java.awt.image.PixelGrabber object) into an Image object using the following code:
public Image getImageFromArray(int[] pixels, int width, int height) {
MemoryImageSource mis = new MemoryImageSource(width, height, pixels, 0, width);
Toolkit tk = Toolkit.getDefaultToolkit();
return tk.createImage(mis);
}
Is it possible to achieve the same result using classes from the ImageIO package(s) so I don't have to use the AWT Toolkit?
Toolkit.getDefaultToolkit() does not seem to be 100% reliable and will sometimes throw an AWTError, whereas the ImageIO classes should always be available, which is why I'm interested in changing my method.
You can create the image without using ImageIO. Just create a BufferedImage using an image type matching the contents of the pixel array.
public static Image getImageFromArray(int[] pixels, int width, int height) {
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
WritableRaster raster = (WritableRaster) image.getData();
raster.setPixels(0,0,width,height,pixels);
return image;
}
When working with the PixelGrabber, don't forget to extract the RGBA info from the pixel array before calling getImageFromArray. There's an example of this in the handlepixel method in the PixelGrabber javadoc. Once you do that, make sure the image type in the BufferedImage constructor is set to BufferedImage.TYPE_INT_ARGB.
Using the raster I got an ArrayIndexOutOfBoundsException even when I created the BufferedImage with TYPE_INT_ARGB. However, using the setRGB(...) method of BufferedImage worked for me.
JavaDoc on BufferedImage.getData() says: "a Raster that is a copy of the image data."
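Putting those comments together, a minimal sketch of the working combination (sourceImage, width and height are placeholders; PixelGrabber delivers packed default-RGB ints, which is exactly what setRGB() expects):
int[] pixels = new int[width * height];
PixelGrabber grabber = new PixelGrabber(sourceImage, 0, 0, width, height, pixels, 0, width);
try {
    grabber.grabPixels(); // fills 'pixels' with packed ARGB values
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
image.setRGB(0, 0, width, height, pixels, 0, width);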
This code works for me, but I doubt its efficiency:
// Build the image from an array.
int[] pixels = new int[width*height];
// Draw a diagonal.
for (int j = 0; j < height; j++) {
    for (int i = 0; i < width; i++) {
        if (i == j) {
            pixels[j*width + i] = Color.RED.getRGB();
        } else {
            pixels[j*width + i] = Color.BLUE.getRGB();
            //pixels[j*width + i] = 0x00000000;
        }
    }
}
BufferedImage pixelImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
pixelImage.setRGB(0, 0, width, height, pixels, 0, width);
I've had good success using java.awt.Robot to grab a screen shot (or a segment of the screen), but to work with ImageIO, you'll need to store it in a BufferedImage instead of the memory image source. Then you can call one static method of ImageIO and save the file. Try something like:
// Capture whole screen
Rectangle region = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
BufferedImage capturedImage = new Robot().createScreenCapture(region);
// Save as PNG
File imageFile = new File("capturedImage.png");
ImageIO.write(capturedImage, "png", imageFile);
As this is one of the highest voted question tagged with ImageIO on SO, I think there's still room for a better solution, even if the question is old. :-)
Have a look at the BufferedImageFactory.java class from my open source imageio project at GitHub.
With it, you can simply write:
BufferedImage image = new BufferedImageFactory(image).getBufferedImage();
The other good thing is that this approach, as a worst case, has about the same performance (time) as the PixelGrabber-based examples already in this thread. For most of the common cases (typically JPEG), it's about twice as fast. In any case, it uses less memory.
As a side bonus, the color model and pixel layout of the original image is kept, instead of translated to int ARGB with default color model. This might save additional memory.
(PS: The factory also supports subsampling, region-of-interest and progress listeners if anyone's interested. :-)
I had the same problem as everyone else when trying to apply the accepted answer to this question: my int array threw an ArrayIndexOutOfBoundsException. I fixed it by enlarging the array, because its length has to be width*height*3 (one sample per band). After that I still could not see the image, so I also set the raster back onto the image:
public static Image getImageFromArray(int[] pixels, int width, int height) {
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
WritableRaster raster = (WritableRaster) image.getData();
raster.setPixels(0,0,width,height,pixels);
image.setData(raster);
return image;
}
And you can see the image if you show it on a JLabel in a JFrame, like this:
JFrame frame = new JFrame();
frame.getContentPane().setLayout(new FlowLayout());
frame.getContentPane().add(new JLabel(new ImageIcon(image)));
frame.pack();
frame.setVisible(true);
passing the image to the ImageIcon.
One last piece of advice: try changing BufferedImage.TYPE_INT_ARGB to something else that matches the image you got the array from; this type is very important. I had an array of 0 and -1 values, so I used BufferedImage.TYPE_3BYTE_BGR.
