I'm trying to rotate a PNG image on a canvas and the quality of the image becomes very bad after rotation. Initially the PNG is an arrow on a transparent background. After rotation it is impossible to tell that it is an arrow.
I use the following code:
Transform oldTransform = new Transform(Display.getCurrent());
gc.getTransform(oldTransform);
Transform transform = new Transform(Display.getCurrent());
transform.translate(xm + imageBounds.width / 2, ym + imageBounds.height / 2);
transform.rotate(179);
transform.translate(-xm - imageBounds.width / 2, -ym - imageBounds.height / 2);
gc.setTransform(transform);
gc.drawImage(image, xm, ym);
gc.setTransform(oldTransform);
transform.dispose();
Thank you in advance.
Instead of rotating the image 180 degrees, you could flip it both horizontally and vertically (which only moves pixels, without interpolating them, so no quality is lost):
private BufferedImage flipH(BufferedImage src) {
    int w = src.getWidth();
    int h = src.getHeight();
    BufferedImage dst = new BufferedImage(w, h, src.getType());
    Graphics2D g = dst.createGraphics();
    g.drawImage(src,
        0, // x of first corner (destination)
        0, // y of first corner (destination)
        w, // x of second corner (destination)
        h, // y of second corner (destination)
        w, // x of first corner (source)
        0, // y of first corner (source)
        0, // x of second corner (source)
        h, // y of second corner (source)
        null);
    g.dispose();
    return dst;
}

private BufferedImage flipV(BufferedImage src) {
    int w = src.getWidth();
    int h = src.getHeight();
    BufferedImage dst = new BufferedImage(w, h, src.getType());
    Graphics2D g = dst.createGraphics();
    g.drawImage(src, 0, 0, w, h, 0, h, w, 0, null);
    g.dispose();
    return dst;
}
...
BufferedImage flipped = flipH(flipV(ImageIO.read(new File("test.png"))));
ImageIcon icon = new ImageIcon(flipped);
...
Edit: or even better, flip both horizontally and vertically in a single op (same as rotating 180 degrees):
g.drawImage(src, 0, 0, w, h, w, h, 0, 0, null);
Edit2: There is also an SWT-specific example of image rotation/flipping without Transform.
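For reference, here is a minimal sketch of that idea (my own, not the linked example): rotating an SWT Image 180 degrees by mirroring its ImageData pixel by pixel, assuming a straightforward palette and per-pixel alpha.

import org.eclipse.swt.graphics.Image;
import org.eclipse.swt.graphics.ImageData;
import org.eclipse.swt.widgets.Display;

// Rotates an SWT Image by 180 degrees by mirroring its pixel data in both axes.
// Assumption: the source uses a plain palette plus per-pixel alpha (as a typical PNG does).
static Image rotate180(Display display, Image srcImage) {
    ImageData src = srcImage.getImageData();
    ImageData dst = new ImageData(src.width, src.height, src.depth, src.palette);
    for (int y = 0; y < src.height; y++) {
        for (int x = 0; x < src.width; x++) {
            int mx = src.width - 1 - x;
            int my = src.height - 1 - y;
            dst.setPixel(mx, my, src.getPixel(x, y)); // copy the color value to the mirrored position
            dst.setAlpha(mx, my, src.getAlpha(x, y)); // preserve the transparency
        }
    }
    return new Image(display, dst);
}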
I'm working with Java to store and modify .jpg images, (not Android or Swing). I want to transform an image with a new dimension keeping the aspect ratio and filling the background with white color if the new dimension is not proportional to the original image.
BufferedImage image = /*Here i read from disk and load the image */;
image = resizeImage(image, newWidth, newHeight);
ImageIO.write(image, extension, new File(filename + "_" + page + "." + extension));
The function I'm trying to implement is resizeImage: the example below resizes the image, but it doesn't keep the aspect ratio.
private static BufferedImage resizeImage(BufferedImage originalImage, int width, int height) {
    BufferedImage resizedImage = new BufferedImage(width, height, originalImage.getType());
    Graphics2D g = resizedImage.createGraphics();
    g.drawImage(originalImage, 0, 0, width, height, null);
    g.dispose();
    return resizedImage;
}
I think a picture will be more illustrative of what I'm asking for:
If the original image is 200x200 and it is asked to resize to 400x300, the result should be a picture with a white margin and the original picture resized inside it. In this example the resized picture would be 300x300.
The problem is not how to resize; it's how to fill the remaining area with white and center the resized original inside it.
This code worked for me:
private static BufferedImage resizeImage(BufferedImage originalImage, int newWidth, int newHeight) {
    BufferedImage resizedImage = new BufferedImage(newWidth, newHeight, originalImage.getType());
    Graphics2D graphics = resizedImage.createGraphics();
    graphics.setColor(Color.WHITE);
    // fill the entire picture with white
    graphics.fillRect(0, 0, newWidth, newHeight);
    int maxWidth = 0;
    int maxHeight = 0;
    // grow both dimensions together until one of them exceeds the requested size
    // (note: this only preserves the aspect ratio for square originals)
    while (maxWidth <= newWidth && maxHeight <= newHeight)
    {
        ++maxWidth;
        ++maxHeight;
    }
    // x offset that centres the scaled image horizontally
    int centerX = (resizedImage.getWidth() - maxWidth) / 2;
    // y offset that centres the scaled image vertically
    int centerY = (resizedImage.getHeight() - maxHeight) / 2;
    // draw the original image scaled to maxWidth x maxHeight
    graphics.drawImage(originalImage, centerX, centerY, maxWidth, maxHeight, null);
    graphics.dispose();
    return resizedImage;
}
Before:
After:
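For non-square originals, a more general sketch (my own addition, not part of the answer above) computes the scale factor from the limiting dimension and letterboxes the rest with white:

// Uses java.awt.Color, java.awt.Graphics2D and java.awt.image.BufferedImage.
private static BufferedImage resizeKeepingAspect(BufferedImage original, int newWidth, int newHeight) {
    // Scale by whichever dimension hits its target first, so the aspect ratio is preserved.
    double scale = Math.min(newWidth / (double) original.getWidth(),
                            newHeight / (double) original.getHeight());
    int scaledWidth = (int) Math.round(original.getWidth() * scale);
    int scaledHeight = (int) Math.round(original.getHeight() * scale);

    BufferedImage result = new BufferedImage(newWidth, newHeight, BufferedImage.TYPE_INT_RGB);
    Graphics2D g = result.createGraphics();
    g.setColor(Color.WHITE);
    g.fillRect(0, 0, newWidth, newHeight); // white background for the margins
    g.drawImage(original,
            (newWidth - scaledWidth) / 2,  // centre horizontally
            (newHeight - scaledHeight) / 2, // centre vertically
            scaledWidth, scaledHeight, null);
    g.dispose();
    return result;
}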
I want to draw a rectangle around a BufferedImage so it creates a frame-like border.
So I load two BufferedImages:
BufferedImage a = ImageIO.read(new File(aPath));
BufferedImage b = ImageIO.read(new File(bPath));
And send it for drawing:
private void drawImageBorder(BufferedImage imageWithoutBorder) {
    Graphics2D graph = imageWithoutBorder.createGraphics();
    graph.setColor(Color.BLACK);
    // create a black Rectangle 1px bigger than the original image
    graph.fill(new Rectangle(imageWithoutBorder.getMinX(), imageWithoutBorder.getMinY(), imageWithoutBorder.getWidth() + 1, imageWithoutBorder.getHeight() + 1));
    // draw the image inside it
    graph.drawImage(imageWithoutBorder, 0, 0, null);
    graph.dispose();
}
For some reason it does nothing. There are similar questions like drawing-filled-rectangle-over-a-bufferedimage, but I could not find helpful answers.
Thanks.
Almost right, but you need to account for the enlarged size and the positioning.
BufferedImage image = ImageIO.read(new File(imagePath));
int w = image.getWidth();
int h = image.getHeight();
int border = 1;
BufferedImage framedImage = new BufferedImage(w + 2*border, h + 2*border, image.getType());
Graphics2D graph = framedImage.createGraphics();
graph.setColor(Color.BLACK);
graph.fill(new Rectangle(0, 0, w + 2*border, h + 2*border));
graph.drawImage(image, border, border, null);
graph.dispose();
Another possible reason is that you don't persist the changes made to the image, for example by writing them back to an image file with ImageIO.write.
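For example, a minimal sketch of that last step (the output path is just a placeholder):

// Write the framed image back to disk; "framed.png" is a placeholder file name.
// ImageIO.write throws IOException, so handle or declare it as appropriate.
ImageIO.write(framedImage, "png", new File("framed.png"));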
I'm trying to scale an image to 50x50 px, but after scaling I get black in the result; I need that black to be white.
This is my code:
BufferedImage imgs = urlToBufferImage("src//imgTest.jpg");
BufferedImage resizedImage = new BufferedImage(50, 50, imgs.getType());
Graphics2D g = resizedImage.createGraphics();
// g.setBackground(Color.WHITE);
// g.drawImage(imgs, 0, 0, 50, 50,Color.WHITE, null);
g.drawImage(imgs.getScaledInstance(50, -1, Image.SCALE_DEFAULT), 0, 0, this);
g.dispose();
This is pretty simple.
My approach would be not to create a new BufferedImage, but to do:
BufferedImage imgs = urlToBufferImage("src//imgTest.jpg");
Graphics g = imgs.createGraphics();
g.drawImage(imgs, x, y, 50, 50, null);
Or, instead of drawing the image inside the bounds, you could do:
Graphics2D g2d = imgs.createGraphics();
g2d.scale(0.5, 0.5);
g2d.drawImage(imgs, x, y, null);
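If the black comes from the uncovered background of the target image, another option, building on the commented-out lines in the question, is to keep a new target image but give it a white background; a minimal sketch of my own:

BufferedImage resizedImage = new BufferedImage(50, 50, BufferedImage.TYPE_INT_RGB);
Graphics2D g = resizedImage.createGraphics();
// The extra Color argument paints the uncovered background white instead of black.
g.drawImage(imgs, 0, 0, 50, 50, Color.WHITE, null);
g.dispose();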
I've got some code that initializes OpenGL to render to a java.awt.Canvas.
The problem is, I can't figure out how I can get the buffer of the canvas and turn it into a BufferedImage.
I've tried overriding getGraphics(), cloning the Raster, and replacing the CanvasPeer with a custom one.
I'm guessing OpenGL doesn't use java graphics in any way then, so how can I get OpenGL's buffer and convert it into a BufferedImage?
I am using LWJGL's code for setting parent:
Display.setParent(display_parent);
Display.create();
You need to copy the data from the OpenGL buffer. I was using this method:
FloatBuffer grabScreen(GL gl)
{
    int w = SCREENWIDTH;
    int h = SCREENHEIGHT;
    FloatBuffer bufor = FloatBuffer.allocate(w * h * 4); // 4 = rgba
    gl.glReadBuffer(GL.GL_FRONT);
    gl.glReadPixels(0, 0, w, h, GL.GL_RGBA, GL.GL_FLOAT, bufor); // copy the front buffer into the FloatBuffer
    return bufor;
}
You need to use something similar, depending on your OpenGL wrapper. The above is a JOGL example.
And here is one for the LWJGL wrapper:
private static synchronized byte[] grabScreen()
{
    int w = screenWidth;
    int h = screenHeight;
    ByteBuffer bufor = BufferUtils.createByteBuffer(w * h * 3);
    GL11.glReadPixels(0, 0, w, h, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, bufor); // copy the framebuffer into the ByteBuffer
    byte[] byteimg = new byte[w * h * 3];
    bufor.get(byteimg, 0, byteimg.length);
    return byteimg;
}
EDIT
This may also be useful (it's not entirely mine and should be tuned as well):
BufferedImage toImage(byte[] data, int w, int h)
{
    if (data.length == 0)
        return null;
    DataBuffer buffer = new DataBufferByte(data, w * h * 3); // 3 bytes per RGB pixel
    int pixelStride = 3; // assuming r, g, b, r, g, b, ...
    int scanlineStride = 3 * w; // no extra padding
    int[] bandOffsets = { 0, 1, 2 }; // r, g, b
    WritableRaster raster = Raster.createInterleavedRaster(buffer, w, h, scanlineStride, pixelStride, bandOffsets,
            null);
    ColorSpace colorSpace = ColorSpace.getInstance(ColorSpace.CS_sRGB);
    boolean hasAlpha = false;
    boolean isAlphaPremultiplied = true;
    int transparency = Transparency.TRANSLUCENT;
    int transferType = DataBuffer.TYPE_BYTE;
    ColorModel colorModel = new ComponentColorModel(colorSpace, hasAlpha, isAlphaPremultiplied, transparency,
            transferType);
    BufferedImage image = new BufferedImage(colorModel, raster, isAlphaPremultiplied, null);

    // glReadPixels returns rows bottom-up, so flip the image vertically.
    AffineTransform flip;
    AffineTransformOp op;
    flip = AffineTransform.getScaleInstance(1, -1);
    flip.translate(0, -image.getHeight());
    op = new AffineTransformOp(flip, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
    image = op.filter(image, null);
    return image;
}
I don't think this is possible for your situation, and here's why:
LWJGL doesn't draw directly to the canvas (at least not in Windows). The canvas is only used to obtain a window handle to provide as the parent window to OpenGL. As such, the canvas is never directly drawn to. To capture the contents, you'll probably have to resort to a screen capture.
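If a screen capture is acceptable, here is a minimal sketch of my own using java.awt.Robot (it assumes the canvas is visible on screen and not obscured by another window):

import java.awt.AWTException;
import java.awt.Canvas;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.image.BufferedImage;

// Capture the on-screen pixels covered by the canvas (the one passed to Display.setParent).
static BufferedImage captureCanvas(Canvas canvas) throws AWTException {
    Rectangle bounds = new Rectangle(canvas.getLocationOnScreen(), canvas.getSize());
    return new Robot().createScreenCapture(bounds);
}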
I have two BufferedImages I loaded in from pngs. The first contains an image, the second an alpha mask for the image.
I want to create a combined image from the two, by applying the alpha mask. My google-fu fails me.
I know how to load/save the images, I just need the bit where I go from two BufferedImages to one BufferedImage with the right alpha channel.
I'm too late with this answer, but maybe it is of use for someone anyway. This is a simpler and more efficient version of Michael Myers' method:
public void applyGrayscaleMaskToAlpha(BufferedImage image, BufferedImage mask)
{
    int width = image.getWidth();
    int height = image.getHeight();

    int[] imagePixels = image.getRGB(0, 0, width, height, null, 0, width);
    int[] maskPixels = mask.getRGB(0, 0, width, height, null, 0, width);

    for (int i = 0; i < imagePixels.length; i++)
    {
        int color = imagePixels[i] & 0x00ffffff; // Mask preexisting alpha
        int alpha = maskPixels[i] << 24; // Shift blue to alpha
        imagePixels[i] = color | alpha;
    }

    image.setRGB(0, 0, width, height, imagePixels, 0, width);
}
It reads all the pixels into an array at the beginning, thus requiring only one for-loop. Also, it directly shifts the blue byte to the alpha (of the mask color), instead of first masking the red byte and then shifting it.
Like the other methods, it assumes both images have the same dimensions.
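A usage sketch of my own (file names are placeholders); the source is copied into a TYPE_INT_ARGB image first, since setRGB can only keep the alpha bits when the image type actually has an alpha channel:

BufferedImage source = ImageIO.read(new File("image.png"));
BufferedImage mask = ImageIO.read(new File("mask.png"));

// Make sure the target has an alpha channel; many loaded images don't.
BufferedImage image = new BufferedImage(source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g = image.createGraphics();
g.drawImage(source, 0, 0, null);
g.dispose();

applyGrayscaleMaskToAlpha(image, mask);
ImageIO.write(image, "png", new File("masked.png"));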
I recently played a bit with this stuff, displaying an image over another one and fading an image to gray.
I also tried masking an image with a mask that has transparency (my previous version of this message!).
I took my little test program and tweaked it a bit to get the wanted result.
Here are the relevant bits:
TestMask() throws IOException
{
    m_images = new BufferedImage[3];
    m_images[0] = ImageIO.read(new File("E:/Documents/images/map.png"));
    m_images[1] = ImageIO.read(new File("E:/Documents/images/mapMask3.png"));
    Image transpImg = TransformGrayToTransparency(m_images[1]);
    m_images[2] = ApplyTransparency(m_images[0], transpImg);
}

private Image TransformGrayToTransparency(BufferedImage image)
{
    ImageFilter filter = new RGBImageFilter()
    {
        public final int filterRGB(int x, int y, int rgb)
        {
            return (rgb << 8) & 0xFF000000;
        }
    };
    ImageProducer ip = new FilteredImageSource(image.getSource(), filter);
    return Toolkit.getDefaultToolkit().createImage(ip);
}

private BufferedImage ApplyTransparency(BufferedImage image, Image mask)
{
    BufferedImage dest = new BufferedImage(
            image.getWidth(), image.getHeight(),
            BufferedImage.TYPE_INT_ARGB);
    Graphics2D g2 = dest.createGraphics();
    g2.drawImage(image, 0, 0, null);
    AlphaComposite ac = AlphaComposite.getInstance(AlphaComposite.DST_IN, 1.0F);
    g2.setComposite(ac);
    g2.drawImage(mask, 0, 0, null);
    g2.dispose();
    return dest;
}
The remainder just displays the images in a little Swing panel.
Note that the mask image is grayscale: black becomes fully transparent, white becomes fully opaque.
Although you have resolved your problem, I thought I could share my take on it. It uses a slightly more Java-ish method, using standard classes to process/filter images.
Actually, my method uses a bit more memory (making an additional image) and I am not sure it is faster (measuring respective performances could be interesting), but it is slightly more abstract.
At least, you have choice! :-)
Your solution could be improved by fetching the RGB data more than one pixel at a time (see http://java.sun.com/javase/6/docs/api/java/awt/image/BufferedImage.html), and by not creating three Color objects on every iteration of the inner loop.
final int width = image.getWidth();
int[] imgData = new int[width];
int[] maskData = new int[width];

for (int y = 0; y < image.getHeight(); y++) {
    // fetch a line of data from each image
    image.getRGB(0, y, width, 1, imgData, 0, 1);
    mask.getRGB(0, y, width, 1, maskData, 0, 1);

    // apply the mask
    for (int x = 0; x < width; x++) {
        int color = imgData[x] & 0x00FFFFFF; // mask away any alpha present
        int maskColor = (maskData[x] & 0x00FF0000) << 8; // shift red into alpha bits
        color |= maskColor;
        imgData[x] = color;
    }

    // replace the data
    image.setRGB(0, y, width, 1, imgData, 0, 1);
}
For those who are using alpha in the original image: I wrote this code in Kotlin. The key point here is that if your original image has an alpha channel, you need to multiply its alpha with the mask's.
Kotlin version:
val width = this.width
val imgData = IntArray(width)
val maskData = IntArray(width)

for (y in 0..(this.height - 1)) {
    this.getRGB(0, y, width, 1, imgData, 0, 1)
    mask.getRGB(0, y, width, 1, maskData, 0, 1)

    for (x in 0..(this.width - 1)) {
        val maskAlpha = (maskData[x] and 0x000000FF) / 255f
        val imageAlpha = ((imgData[x] shr 24) and 0x000000FF) / 255f
        val rgb = imgData[x] and 0x00FFFFFF
        val alpha = ((maskAlpha * imageAlpha) * 255).toInt() shl 24
        imgData[x] = rgb or alpha
    }

    this.setRGB(0, y, width, 1, imgData, 0, 1)
}
Java version (just translated from Kotlin)
int width = image.getWidth();
int[] imgData = new int[width];
int[] maskData = new int[width];

for (int y = 0; y < image.getHeight(); y++) {
    image.getRGB(0, y, width, 1, imgData, 0, 1);
    mask.getRGB(0, y, width, 1, maskData, 0, 1);

    for (int x = 0; x < image.getWidth(); x++) {
        // Normalize (0 - 1)
        float maskAlpha = (maskData[x] & 0x000000FF) / 255f;
        float imageAlpha = ((imgData[x] >> 24) & 0x000000FF) / 255f;
        // Image without alpha channel
        int rgb = imgData[x] & 0x00FFFFFF;
        // Multiplied alpha
        int alpha = ((int) ((maskAlpha * imageAlpha) * 255)) << 24;
        // Add alpha to image
        imgData[x] = rgb | alpha;
    }

    image.setRGB(0, y, width, 1, imgData, 0, 1);
}
Actually, I've figured it out. This is probably not a fast way of doing it, but it works:
// resultImg is assumed to be a BufferedImage of the same size with an alpha channel
// (e.g. BufferedImage.TYPE_INT_ARGB), created elsewhere.
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        Color c = new Color(image.getRGB(x, y));
        Color maskC = new Color(mask.getRGB(x, y));
        Color maskedColor = new Color(c.getRed(), c.getGreen(), c.getBlue(),
                maskC.getRed());
        resultImg.setRGB(x, y, maskedColor.getRGB());
    }
}