How to draw images with hardware acceleration on all platforms? - java

I've created a 256x256 image with red = x and alpha = y:
int[] pix = new int[256*256];
for (int i = 0; i < pix.length; ++i)
    pix[i] = i << 16; // low byte of i -> red (bits 16-23), high byte -> alpha (bits 24-31)
BufferedImage img1 = new BufferedImage(256, 256, BufferedImage.TYPE_INT_ARGB);
img1.setRGB(0, 0, 256, 256, pix, 0, 256);
and copied it to a VolatileImage:
VolatileImage img2 = canvas.createVolatileImage(256, 256);
img2.createGraphics().drawImage(img1, 0, 0, null);
When I draw the BufferedImage to the canvas, alpha is processed incorrectly and drawing is not accelerated (0.9 ms per drawImage):
When I draw the VolatileImage, alpha is processed and drawing is accelerated (0.03 ms):
But that is under Windows. Under macOS I see a different result: for the BufferedImage alpha is not processed but drawing is accelerated, while for the VolatileImage alpha is processed correctly but drawing is NOT accelerated. (Or maybe both are not accelerated, and the software processing of the BufferedImage is just much faster.)
Question: how do I draw an image with correct alpha processing and with acceleration on all platforms?
P.S. img2.setAccelerationPriority(1) has no effect.
P.P.S. I was wrong; alpha is not processed correctly in the second case either. It is just a precalculated image with alpha already applied to the default background color of the canvas.

System.getProperties().setProperty("sun.java2d.opengl", "true");
This setting resolved the acceleration problem; drawImage now takes 0.004 ms.
There was no problem with alpha processing at all. I was just drawing the same image with alpha 1000 times in the same place for timing, and that is what produced the ugly picture.
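For reference, here is a minimal sketch of the pattern that ended up working: enable the OpenGL pipeline before any AWT/Java2D class is initialized, and copy the BufferedImage into the VolatileImage with the usual validate()/contentsLost() loop (the canvas setup is assumed to be the one from the question):

import java.awt.Canvas;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.image.VolatileImage;

public class AcceleratedCopy {

    public static void main(String[] args) {
        // Must be set before any AWT/Java2D class initializes the rendering pipeline;
        // equivalent to passing -Dsun.java2d.opengl=true on the command line.
        System.setProperty("sun.java2d.opengl", "true");
        // ... build the Frame/Canvas and call toVolatile(...) as in the question.
    }

    // Assumes 'canvas' is already displayable (added to a visible Frame).
    static VolatileImage toVolatile(Canvas canvas, BufferedImage src) {
        VolatileImage vi = canvas.createVolatileImage(src.getWidth(), src.getHeight());
        do {
            if (vi.validate(canvas.getGraphicsConfiguration()) == VolatileImage.IMAGE_INCOMPATIBLE) {
                // Surface is incompatible with the current display; recreate it.
                vi = canvas.createVolatileImage(src.getWidth(), src.getHeight());
            }
            Graphics2D g = vi.createGraphics();
            g.drawImage(src, 0, 0, null);
            g.dispose();
        } while (vi.contentsLost()); // VolatileImage contents can be lost at any time
        return vi;
    }
}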

Related

How can I set pixels in Java BufferedImages using non-ARGB color spaces?

I'm writing an application that needs to work with 16-bit "5-5-5" RGB colors (that is, 5 bits for each color and one bit of padding). In order to handle these images, I am using the BufferedImage class provided by AWT. The BufferedImage class specifically allows for the usage of non-RGB color spaces by taking either a ColorModel object or a predefined image type constant - one of which is the 5-5-5 pixel format that I need.
My problem is this: the BufferedImage "setRGB()" method states in its description that color values provided are "assumed to be in the default RGB color model, TYPE_INT_ARGB, and default sRGB color space" (per the BufferedImage documentation page). No other method seems to accept values designed for different color spaces, either.
Is there a way to use my non-standard color space directly with BufferedImage, or would I have to rely on the class's internal color conversion mechanisms to handle all of my colors? (Or am I just misreading/misunderstanding something about how the class works?)
BufferedImage.TYPE_USHORT_555_RGB still uses a completely standard RGB color space (in fact, it uses sRGB), so I don't think a different color space is what you are looking for.
If you want to perform painting or other operations in Java, just use the normal methods like setRGB()/getRGB() and createGraphics()/Graphics2D. Everything will be properly converted to and from the packed USHORT_555_RGB format for you.
For example:
BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_USHORT_555_RGB);
// Do some custom painting
Graphics2D g = image.createGraphics();
g.drawImage(otherImage, 0, 0, null); // image type here does not matter
g.setColor(Color.ORANGE); // Color in sRGB, but does not matter
g.fillOval(0, 0, w, h);
g.dispose();
image.setRGB(0, h/2, w, 1, new int[w]); // Silly way to create a horizontal black line at the center of the image... Don't do this, use fillRect(0, h/2, w, 1)! ;-)
// image will still be USHORT_555_RGB *internally*
However, if you have pixel data in the USHORT_555_RGB format (i.e. from an external library/API/service), it may be faster and more accurate to set these values directly in the raster/data buffer. The same applies if you need to pass the pixel values back to the same library/API/service.
For example, using the Raster:
BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_USHORT_555_RGB);
// Some fictional API. It's assumed that apiPixels.length == w * h
short[] apiPixels = api.getPixelsUSHORT_555_RGB(w, h);
WritableRaster raster = image.getRaster();
// Set short values to image
raster.setDataElements(0, 0, w, h, apiPixels);
// Get short values from image
short[] pixels = (short[]) raster.getDataElements(0, 0, w, h, null); // TYPE_USHORT_555_RGB -> always short[]
api.setPixels(pixels, w, h); // Another fictional API
Or, alternatively, use the DataBuffer:
BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_USHORT_555_RGB);
// Some fictional API. It's assumed that apiPixels.length == w * h
short[] apiPixels = api.getPixelsUSHORT_555_RGB(w, h);
DataBufferUShort buffer = (DataBufferUShort) image.getRaster().getDataBuffer(); // TYPE_USHORT_555_RGB -> always DataBufferUShort
// Set short values to image
System.arraycopy(apiPixels, 0, buffer.getData(), 0, apiPixels.length);
// Get short values from image
api.setPixels(buffer.getData(), w, h);
In most cases it does not matter which method you use, but the first approach (using Raster only) may keep the image managed, which will make images display faster on screen from your Java process.
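If you want to check whether an image actually ends up accelerated/managed on a given platform, you can query its capabilities against the screen's GraphicsConfiguration. A small diagnostic sketch (not from the answer above, just an illustration):

import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.ImageCapabilities;
import java.awt.image.BufferedImage;

public class AccelerationCheck {
    public static void main(String[] args) {
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        BufferedImage image = new BufferedImage(256, 256, BufferedImage.TYPE_USHORT_555_RGB);
        // Note: an image typically only becomes managed after it has been copied
        // to the screen a few times without its raster being touched in between.
        ImageCapabilities caps = image.getCapabilities(gc);
        System.out.println("accelerated: " + caps.isAccelerated());
    }
}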
PS: If a different color space is really what you need (i.e. the pixel array from the external library/API/service uses a different color space, and you need to view the pixels in this color space), you can create a BufferedImage in USHORT_555_RGB style with a custom color space like this:
// Either use one of the built-in color spaces, or load one from disk
ColorSpace colorSpace = ColorSpace.getInstance(ColorSpace.CS_LINEAR_RGB);
ColorSpace colorSpaceToo = new ICC_ColorSpace(ICC_Profile.getInstance(Files.newInputStream(new File("/path/to/custom_rgb_profile.icc").toPath())));
// Create a color model using your color space, TYPE_USHORT and 5/5/5 mask, no transparency
ColorModel colorModel = new DirectColorModel(colorSpace, 15, 0x7C00, 0x03E0, 0x001F, 0, false, DataBuffer.TYPE_USHORT);
// And finally, create an image from the color model and a compatible raster
BufferedImage imageToo = new BufferedImage(colorModel, colorModel.createCompatibleWritableRaster(w, h), colorModel.isAlphaPremultiplied(), null);
Just remember that because Java2D graphics operations and setRGB/getRGB still use sRGB, all operations on your image will now be converted back and forth between your color space and sRGB. Performance will not be as good.

Reduce the size of an image in java

I want to make sure that an image in my application is not more than 200x200 px and that its file size is not more than 150 kB. For example, if the file size of the image is more than 150 kB, I need to reduce it to 150 kB. The image can be of type JPEG, PNG, etc.
I have the following code for resizing an image to a given width and height
private BufferedImage resize(BufferedImage img, int newW, int newH) {
    int w = img.getWidth();
    int h = img.getHeight();
    BufferedImage dimg = new BufferedImage(newW, newH, img.getType());
    Graphics2D g = dimg.createGraphics();
    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);
    g.drawImage(img, 0, 0, newW, newH, 0, 0, w, h, null);
    g.dispose();
    return dimg;
}
But I'm not sure how to go about reducing the file size to 150 kB. How can I do that in Java? An example would be really appreciated.
Thank you
Just as an option: ImageMagick. It also has some convenience wrappers for Java, so you can use it easily.
Does your question have any practical relevance, or is it just theoretical?
A 200x200 pixel image with a colour depth of 24 bits will, uncompressed, require about 117 kB (200 × 200 × 3 bytes). If you use any reasonable JPEG encoder, it will also never exceed 150 kB for such an image.
Otherwise, you can only resize or re-encode the image multiple times until it gets below the target file size.
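If the practical case does come up (e.g. larger images or other formats), the usual approach is to re-encode as JPEG at decreasing quality until the output fits the limit. A rough sketch using only ImageIO (the method name and quality steps are my own choices, not from the question):

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.MemoryCacheImageOutputStream;

public class JpegSizeLimiter {

    // Encodes the image as JPEG, lowering quality until the result fits maxBytes.
    // The input should not have an alpha channel (use TYPE_INT_RGB).
    static byte[] encodeUnderLimit(BufferedImage img, long maxBytes) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
        try {
            for (float quality = 0.9f; quality >= 0.1f; quality -= 0.1f) {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                ImageWriteParam param = writer.getDefaultWriteParam();
                param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
                param.setCompressionQuality(quality);
                try (MemoryCacheImageOutputStream out = new MemoryCacheImageOutputStream(bytes)) {
                    writer.setOutput(out);
                    writer.write(null, new IIOImage(img, null, null), param);
                }
                if (bytes.size() <= maxBytes) {
                    return bytes.toByteArray();
                }
            }
        } finally {
            writer.dispose();
        }
        throw new IOException("Could not get below " + maxBytes + " bytes");
    }
}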

Resizing an indexed image in Java without losing transparency

This is my function to resize images.
The quality is not photoshop but it's acceptable.
What's not acceptable is the behaviour on indexed png.
We expected that scaling down an image with a 256-color palette containing a transparent index would give a resized image with the same transparency, but this is not the case.
So we do the resize on a new ARGB image and then reduce it to 256 colors. The problem is how to "reintroduce" the transparent pixel index.
private static BufferedImage internalResize(BufferedImage source, int destWidth, int destHeight) {
    int sourceWidth = source.getWidth();
    int sourceHeight = source.getHeight();
    double xScale = ((double) destWidth) / (double) sourceWidth;
    double yScale = ((double) destHeight) / (double) sourceHeight;
    Graphics2D g2d = null;
    BufferedImage resizedImage = new BufferedImage(destWidth, destHeight, BufferedImage.TRANSLUCENT);
    log.debug("resizing image to w:" + destWidth + " h:" + destHeight);
    try {
        g2d = resizedImage.createGraphics();
        g2d.setRenderingHint(RenderingHints.KEY_ALPHA_INTERPOLATION, RenderingHints.VALUE_ALPHA_INTERPOLATION_QUALITY);
        g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
        g2d.setRenderingHint(RenderingHints.KEY_COLOR_RENDERING, RenderingHints.VALUE_COLOR_RENDER_QUALITY);
        g2d.setRenderingHint(RenderingHints.KEY_DITHERING, RenderingHints.VALUE_DITHER_ENABLE);
        g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BICUBIC);
        g2d.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
        AffineTransform at = AffineTransform.getScaleInstance(xScale, yScale);
        g2d.drawRenderedImage(source, at);
    } finally {
        if (g2d != null)
            g2d.dispose();
    }
    // doesn't keep the transparency
    if (source.getType() == BufferedImage.TYPE_BYTE_INDEXED) {
        log.debug("reducing to color-indexed image");
        BufferedImage indexedImage = new BufferedImage(destWidth, destHeight, BufferedImage.TYPE_BYTE_INDEXED);
        Graphics g = null;
        try {
            g = indexedImage.createGraphics();
            g.drawImage(resizedImage, 0, 0, null);
        } finally {
            if (g != null)
                g.dispose();
        }
        System.err.println("source" + ((IndexColorModel) source.getColorModel()).getTransparentPixel()
                + " " + ((IndexColorModel) indexedImage.getColorModel()).getTransparentPixel());
        return indexedImage;
    }
    return resizedImage;
}
Try changing
BufferedImage indexedImage = new BufferedImage(destWidth, destHeight, BufferedImage.TYPE_BYTE_INDEXED);
to
BufferedImage indexedImage = new BufferedImage(destWidth, destHeight, BufferedImage.TYPE_BYTE_INDEXED, (IndexColorModel) source.getColorModel());
Even if that specifically doesn't help you (which it might not if the resizing, for whatever reason, changes what specific color values are indexed), the fact that you can create a new BufferedImage with a given IndexColorModel will probably be quite useful for you.
http://download.oracle.com/javase/6/docs/api/java/awt/image/BufferedImage.html#BufferedImage%28int,%20int,%20int,%20java.awt.image.IndexColorModel%29
EDIT: Just noticed that your resizedImage constructor should probably use BufferedImage.TYPE_INT_ARGB rather than BufferedImage.TRANSLUCENT. Not sure if that will change how it works, but BufferedImage.TRANSLUCENT isn't supposed to be passed to that form of the constructor. http://download.oracle.com/javase/1.5.0/docs/api/java/awt/image/BufferedImage.html#BufferedImage%28int,%20int,%20int%29
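That is, the constructor call near the top of internalResize would become something like:

BufferedImage resizedImage = new BufferedImage(destWidth, destHeight, BufferedImage.TYPE_INT_ARGB);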
Anyway, maybe try something like this:
DirectColorModel resizedModel = (DirectColorModel) resizedImage.getColorModel();
int numPixels = resizedImage.getWidth() * resizedImage.getHeight();
byte[] reds = new byte[numPixels];
byte[] blues = new byte[numPixels];
byte[] greens = new byte[numPixels];
byte[] alphas = new byte[numPixels];
int curIndex = 0;
int curPixel;
for (int i = 0; i < resizedImage.getWidth(); i++)
{
    for (int j = 0; j < resizedImage.getHeight(); j++)
    {
        curPixel = resizedImage.getRGB(i, j);
        reds[curIndex] = (byte) resizedModel.getRed(curPixel);
        blues[curIndex] = (byte) resizedModel.getBlue(curPixel);
        greens[curIndex] = (byte) resizedModel.getGreen(curPixel);
        alphas[curIndex] = (byte) resizedModel.getAlpha(curPixel);
        curIndex++;
    }
}
// IndexColorModel takes the components in (red, green, blue, alpha) order
BufferedImage indexedImage = new BufferedImage(destWidth, destHeight, BufferedImage.TYPE_BYTE_INDEXED, new IndexColorModel(resizedModel.getPixelSize(), numPixels, reds, greens, blues, alphas));
Don't know if this will actually work, though.
Indexed images with transparency are a hack. They only work under certain conditions and resizing isn't one of them.
An image with transparency doesn't just have fully opaque and fully transparent pixels. In particular, at irregularly shaped borders there are many pixels with partial transparency. If you save it in a format with indexed colors where a single palette entry is used for transparent pixels, you have to decide what color the background will have. All pixels with partial transparency are then blended between their own color and the background color (according to their transparency) and become fully opaque. Only the fully transparent pixels are assigned the transparent pseudo-color.
If such an image is displayed against a background with a different color, an ugly border becomes apparent. It's an artifact of the inadequate transparency handling.
When you resize the image, you introduce more artifacts. The color of a new pixel is usually blended from several neighboring pixels. If some are transparent and some are opaque, the result is a partially transparent pixel. When you save it, the partially transparent pixel is blended against the background color and becomes opaque. As a result, the opaque area (and the associated artifacts) grows with each resize (or most other image manipulations).
Whatever programming language or graphics library you use, the artifacts will grow and the result will become worse. I recommend you use an ARGB buffer and save the image as a non-indexed PNG file.
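Following that recommendation, a minimal sketch of resizing into an ARGB buffer and saving as a non-indexed PNG (the method name and parameters are placeholders, not taken from the original code):

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ResizeToPng {

    // Resizes into a full ARGB buffer and writes a non-indexed PNG,
    // so partial transparency survives the resize.
    static void resizeAndSave(BufferedImage source, int destWidth, int destHeight, File out) throws IOException {
        BufferedImage resized = new BufferedImage(destWidth, destHeight, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2d = resized.createGraphics();
        try {
            g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BICUBIC);
            g2d.drawImage(source, 0, 0, destWidth, destHeight, null);
        } finally {
            g2d.dispose();
        }
        // PNG supports a full alpha channel, so no transparent pseudo-color is needed.
        ImageIO.write(resized, "png", out);
    }
}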

Why does this GIF end up as a black square when resizing with Java ImageIO

Java ImageIO correctly displays this black & white image http://www.jthink.net/jaikoz/scratch/black.gif, but when I try to resize it using this code
public static BufferedImage resize2D(Image srcImage, int size)
{
    int w = srcImage.getWidth(null);
    int h = srcImage.getHeight(null);
    // Determine the scaling required to get desired result.
    float scaleW = (float) size / (float) w;
    float scaleH = (float) size / (float) h;
    MainWindow.logger.finest("Image Resizing to size:" + size + " w:" + w + ":h:" + h + ":scaleW:" + scaleW + ":scaleH" + scaleH);
    // Create an image buffer in which to paint on, create as an opaque RGB type image; it doesn't matter what type
    // the original image is, we want to convert to the best type for displaying on screen regardless
    BufferedImage bi = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
    // Set the scale.
    AffineTransform tx = new AffineTransform();
    tx.scale(scaleW, scaleH);
    // Paint image.
    Graphics2D g2d = bi.createGraphics();
    g2d.setComposite(AlphaComposite.Src);
    g2d.drawImage(srcImage, tx, null);
    g2d.dispose();
    return bi;
}
I just end up with a black image. I'm trying to make the image smaller (a thumbnail), but even if I resize it larger for test purposes it still ends up as a black square.
Other images resize okay. Does anyone know what the problem is with the GIF, and/or whether this is a Java bug?
Here is the string representation of the ColorModel of the linked image when loaded through ImageIO:
IndexColorModel: #pixelBits = 1 numComponents = 4 color space = java.awt.color.ICC_ColorSpace@1572e449 transparency = 2 transIndex = 1 has alpha = true isAlphaPre = false
If I understand this correctly, you have one bit per pixel, where a 0 bit is opaque black and a 1 bit is transparent. Your BufferedImage is initially all black, so drawing a mixture of black and transparent pixels onto it will have no effect.
Although you are using AlphaComposite.Src this will not help as the R/G/B values for the transparent palette entry read as zero (I am not sure whether this is encoded in the GIF or just the default in the JDK.)
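If you want to verify this for your own GIF, you can dump the palette directly (a quick diagnostic sketch, not part of the fix):

import java.awt.image.BufferedImage;
import java.awt.image.IndexColorModel;
import java.net.URL;
import javax.imageio.ImageIO;

public class PaletteDump {
    public static void main(String[] args) throws Exception {
        BufferedImage gif = ImageIO.read(new URL("http://www.jthink.net/jaikoz/scratch/black.gif"));
        IndexColorModel icm = (IndexColorModel) gif.getColorModel();
        int trans = icm.getTransparentPixel(); // index of the transparent entry, or -1
        for (int i = 0; i < icm.getMapSize(); i++) {
            System.out.println("entry " + i + ": r=" + icm.getRed(i) + " g=" + icm.getGreen(i)
                    + " b=" + icm.getBlue(i) + " a=" + icm.getAlpha(i)
                    + (i == trans ? " (transparent)" : ""));
        }
    }
}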
You can work around it by:
Initializing the BufferedImage with all-white pixels
Using AlphaComposite.SrcOver
So the last part of your resize2D implementation would become:
// Paint image.
Graphics2D g2d = bi.createGraphics();
g2d.setColor(Color.WHITE);
g2d.fillRect(0, 0, size, size);
g2d.setComposite(AlphaComposite.SrcOver);
g2d.drawImage(srcImage, tx, null);
Try this:
BufferedImage bi = new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB);
That makes it work. Of course, the question is why..?

Java DirectColorModel vs. IndexColorModel when dealing with alphas

I have a BufferedImage with an IndexColorModel. I then wish to apply an AffineTransform with AffineTransformOp in order to create a transformed version of displayImage.
Here's a code snippet:
int type = isRGB() ? AffineTransformOp.TYPE_BILINEAR : AffineTransformOp.TYPE_NEAREST_NEIGHBOR;
AffineTransformOp op = new AffineTransformOp(atx, type);
displayImage = op.filter(displayImage, null);
I'm running this with many images, and from an earlier post I discovered that if I set the transform type to bilinear, I was running out of memory because I was getting an image back with a DirectColorModel. However, this DirectColorModel had a correct alpha channel (when I drew the image on a green background after translating it, I could see green around the whole image). When I set the interpolation type to nearest neighbor, pixels above and to the left of the image appear black no matter what the background is. I'm assuming this means that the alpha is not getting set.
Can anyone tell me how to correctly set the alpha channel with an IndexColorModel, or change the AffineTransformOp parameters such that I get an IndexColorModel with the correct alpha?
Thanks!!
EDIT:
Here is the desired effect, with AffineTransformOp.TYPE_BILINEAR:
Here is the effect that I'm seeing with AffineTransformOp.TYPE_NEAREST_NEIGHBOR:
The whole background is initially painted green for effect and in both cases the image is drawn at position (0, 0).
I'm not sure what effect you're trying to achieve, but I get expected results when I adjust the alpha before and/or after transforming. Typically, I start with setComposite(AlphaComposite.Clear) followed by fillRect(). If all else fails, you can filter() to a WritableRaster, and brute-force the result you want. Also, you might look at RenderingHints related to KEY_ALPHA_INTERPOLATION. Here's a toy I use to experiment with various combinations of alpha, mode and color.
I seem to recall seeing the effect your images show. Recalling that the AffineTransformOp may return an image with different co-ordinates, especially with rotation, I'm guessing the added "empty" space isn't getting initialized correctly. You can get a transparent border with the code below, which also makes rotation around the center somewhat more symmetric:
private BufferedImage getSquareImage(BufferedImage image) {
    int w = image.getWidth();
    int h = image.getHeight();
    int max = Math.max(w, h);
    BufferedImage square = new BufferedImage(
        max, max, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g2d = square.createGraphics();
    g2d.setRenderingHint(
        RenderingHints.KEY_ANTIALIASING,
        RenderingHints.VALUE_ANTIALIAS_ON);
    g2d.setComposite(AlphaComposite.Clear);
    g2d.fillRect(0, 0, max, max);
    g2d.setComposite(AlphaComposite.Src);
    g2d.drawImage(image, (max - w) / 2, (max - h) / 2, null);
    g2d.dispose();
    return square;
}
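For completeness, a sketch of how that helper might be wired into the original snippet; atx and displayImage are the poster's variables, and this is untested:

// Square the image up into an ARGB copy first, then apply the nearest-neighbor transform.
BufferedImage squared = getSquareImage(displayImage);
AffineTransformOp op = new AffineTransformOp(atx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
displayImage = op.filter(squared, null);
// The destination that filter() creates for an ARGB source starts fully transparent,
// so the uncovered border should now show the green background instead of black.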
