So, I'm trying to get the color of a specific pixel in a BufferedImage...
public void LoadImageLevel (BufferedImage image) {
int w = image.getWidth ();
int h = image.getHeight ();
System.out.println (w + " " + h);
for (int xx = 0; xx < h; xx++) {
for (int yy = 0; yy < w; yy++) {
int pixel = image.getRGB (xx, yy);
int red = (pixel >> 16) & 0xff;
int green = (pixel >> 8) & 0xff;
int blue = (pixel) & 0xff;
if (red == 255 && green == 255 && blue == 255) {
handler.addObject (new Block (xx * 32, yy * 32, ObjectID.Block, 32, 32));
}
}
}
}
And it's called from the Main class constructor:
ImageLoader imageLoader = new ImageLoader ();
level = imageLoader.loadImage ("/levels/level_test.png");
LoadImageLevel (level);
The BufferedImage is loaded from my ImageLoader class:
import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;
public class ImageLoader {
private BufferedImage image;
public BufferedImage loadImage (String path) {
try {
image = ImageIO.read (getClass ().getResource (path));
} catch (IOException e) {
e.printStackTrace ();
}
return image;
}
}
When I run the project I get this error:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Coordinate out of bounds!
at sun.awt.image.ByteInterleavedRaster.getDataElements(Unknown Source)
at java.awt.image.BufferedImage.getRGB(Unknown Source)
at com.main.index.Game.LoadImageLevel(Game.java:190)
at com.main.index.Game.<init>(Game.java:41)
at com.main.index.Game.main(Game.java:206)
Line 190 is the "int pixel = image.getRGB (xx, yy);", line 41 is where it's called in the constructor, and line 206 is the main method.
Thanks in advance! ^_^
The problem is here:
int pixel = image.getRGB (xx, yy);
It should be:
int pixel = image.getRGB (yy, xx);
Your xx goes from 0 to the height, instead of going from 0 to the width. Your yy goes from 0 to the width, instead of going from 0 to the height.
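Equivalently, you can keep getRGB(xx, yy) and instead swap the loop bounds, so that xx runs over the width and yy over the height. This variant also keeps the block coordinates in (x, y) order, so the level is not transposed. A minimal sketch, using the same names as in the question:
for (int xx = 0; xx < w; xx++) {      // xx is the column (x coordinate)
    for (int yy = 0; yy < h; yy++) {  // yy is the row (y coordinate)
        int pixel = image.getRGB(xx, yy); // now in (x, y) order, within bounds
        int red = (pixel >> 16) & 0xff;
        int green = (pixel >> 8) & 0xff;
        int blue = pixel & 0xff;
        if (red == 255 && green == 255 && blue == 255) {
            handler.addObject(new Block(xx * 32, yy * 32, ObjectID.Block, 32, 32));
        }
    }
}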
level = imageLoader.loadImage ("/levels/level_test.png");
The image you are using should be smaller than the total width and height of the main window. And in a case like this, where RGB values are read, use a picture whose sides are a power of two (2^X where X = 1, 2, 3, 4, 5, ...).
Try this:
resize level_test.png to 512 by 512 pixels.
That is the fix here, since the raster array has fixed boundaries.
java.lang.ArrayIndexOutOfBoundsException: Coordinate out of bounds!
at sun.awt.image.ByteInterleavedRaster.getDataElements(Unknown Source)
Related
I am trying to get every single color of every single pixel of an image.
My idea was following:
int[] pixels;
BufferedImage image;
image = ImageIO.read(this.getClass().getResource("image.png"));
pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
Is that right? I can't even check what the "pixels" array contains, because I get the following error:
java.awt.image.DataBufferByte cannot be cast to java.awt.image.DataBufferInt
I would just like to get the color of every pixel into an array; how do I achieve that?
import java.io.*;
import java.awt.*;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
public class GetPixelColor {
public static void main(String args[]) throws IOException {
File file = new File("your_file.jpg");
BufferedImage image = ImageIO.read(file);
// Getting pixel color by position x and y
int x = 0, y = 0; // sample coordinates; replace with the pixel you want
int clr = image.getRGB(x, y);
int red = (clr & 0x00ff0000) >> 16;
int green = (clr & 0x0000ff00) >> 8;
int blue = clr & 0x000000ff;
System.out.println("Red Color value = " + red);
System.out.println("Green Color value = " + green);
System.out.println("Blue Color value = " + blue);
}
}
Of course, you have to add a for loop over all pixels.
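A minimal sketch of that loop, reusing the same extraction (nothing here beyond the code above applied to every pixel):
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        int clr = image.getRGB(x, y);
        int red = (clr & 0x00ff0000) >> 16;
        int green = (clr & 0x0000ff00) >> 8;
        int blue = clr & 0x000000ff;
        // store or process red/green/blue here
    }
}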
The problem (also with the answer that was linked from the first answer) is that you hardly ever know what exact type your buffered image will be after reading it with ImageIO. It could contain a DataBufferByte or a DataBufferInt. You may deduce it in some cases via BufferedImage#getType(), but in the worst case, it has type TYPE_CUSTOM, and then you can only fall back to some instanceof tests.
However, you can convert your image into a BufferedImage that is guaranteed to have a DataBufferInt with ARGB values - namely with something like
public static BufferedImage convertToARGB(BufferedImage image)
{
BufferedImage newImage = new BufferedImage(
image.getWidth(), image.getHeight(),
BufferedImage.TYPE_INT_ARGB);
Graphics2D g = newImage.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
return newImage;
}
Otherwise, you can call image.getRGB(x,y), which may perform the required conversions on the fly.
BTW: Note that obtaining the data buffer of a BufferedImage may degrade painting performance, because the image can no longer be "managed" and kept in VRAM internally.
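Putting the two together, a small usage sketch (the file name is an assumption) that converts first and then grabs the int[] array the question was after:
BufferedImage source = ImageIO.read(new File("image.png"));
BufferedImage argb = convertToARGB(source);
// Safe now: a TYPE_INT_ARGB image is always backed by a DataBufferInt
int[] pixels = ((DataBufferInt) argb.getRaster().getDataBuffer()).getData();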
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
public class Main {
public static void main(String[] args) throws IOException {
BufferedImage bufferedImage = ImageIO.read(new File("norris.jpg"));
int height = bufferedImage.getHeight(), width = bufferedImage.getWidth();
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int RGBA = bufferedImage.getRGB(x, y);
int alpha = (RGBA >> 24) & 255;
int red = (RGBA >> 16) & 255;
int green = (RGBA >> 8) & 255;
int blue = RGBA & 255;
}
}
}
}
Assuming the buffered image represents an image with 8-bit RGBA color components packed into integer pixels, I searched for "RGBA color space" on Wikipedia and found the following:
In the byte-order scheme, "RGBA" is understood to mean a byte R,
followed by a byte G, followed by a byte B, and followed by a byte A.
This scheme is commonly used for describing file formats or network
protocols, which are both byte-oriented.
With a simple bitwise AND and bit shift you can get the value of each color and the alpha value of the pixel.
Also interesting is the other ordering scheme for RGBA:
In the word-order scheme, "RGBA" is understood to represent a complete
32-bit word, where R is more significant than G, which is more
significant than B, which is more significant than A. This scheme can
be used to describe the memory layout on a particular system. Its
meaning varies depending on the endianness of the system.
You need byte[] pixels, not int[] pixels.
Try this: Java - get pixel array from image
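A minimal sketch of what that linked answer boils down to (the channel order noted in the comment assumes a TYPE_3BYTE_BGR image):
byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
// For TYPE_3BYTE_BGR the bytes are laid out per pixel as blue, green, red;
// for TYPE_4BYTE_ABGR it is alpha, blue, green, red.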
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
public class ImageUtil {
public static Color[][] loadPixelsFromImage(File file) throws IOException {
BufferedImage image = ImageIO.read(file);
Color[][] colors = new Color[image.getWidth()][image.getHeight()];
for (int x = 0; x < image.getWidth(); x++) {
for (int y = 0; y < image.getHeight(); y++) {
colors[x][y] = new Color(image.getRGB(x, y));
}
}
return colors;
}
public static void main(String[] args) throws IOException {
Color[][] colors = loadPixelsFromImage(new File("image.png"));
System.out.println("Color[0][0] = " + colors[0][0]);
}
}
I know this has already been answered, but the answers given are a bit convoluted and could use improvement.
The simple idea is to just loop through every (x,y) pixel in the image, and get the color of that pixel.
BufferedImage image = MyImageLoader.getSomeImage();
for ( int x = 0; x < image.getWidth(); x++ ) {
for( int y = 0; y < image.getHeight(); y++ ) {
Color pixel = new Color( image.getRGB( x, y ) );
// Do something with pixel color here :)
}
}
You could then perhaps wrap this method in a class, and implement Java's Iterable API.
class IterableImage implements Iterable<Color> {
private BufferedImage image;
public IterableImage( BufferedImage image ) {
this.image = image;
}
@Override
public Iterator<Color> iterator() {
return new Itr();
}
private final class Itr implements Iterator<Color> {
private int x = 0, y = 0;
@Override
public boolean hasNext() {
return y < image.getHeight();
}
@Override
public Color next() {
// Read the current pixel first, then advance to the next position
Color color = new Color( image.getRGB( x, y ) );
x += 1;
if ( x >= image.getWidth() ) {
x = 0;
y += 1;
}
return color;
}
}
}
The usage of which might look something like the following
BufferedImage image = MyImageLoader.getSomeImage();
for ( Color color : new IterableImage( image ) ) {
// Do something with color here :)
}
I want to load a single large bitmap image from a file, run a function that manipulates individual pixels, and then re-save the bitmap.
File formats can be either PNG or BMP, and the manipulating function is something simple such as:
if r=200,g=200,b=200 then +20 on all values, else -100 on all values
The trick is being able to load a bitmap and read each pixel, line by line.
Is there standard library machinery in Java that can handle this I/O?
(The bitmap will need to be several megapixels; I need to be able to handle millions of pixels.)
Thanks to MadProgrammer I have an answer:
package image_test;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
public class Image_test {
public static void main(String[] args) {
BufferedImage img = null;
try {
img = ImageIO.read(new File("test.bmp"));
} catch (IOException e) {
e.printStackTrace();
}
int height = img.getHeight();
int width = img.getWidth();
int amountPixel = 0;
int amountBlackPixel = 0;
int rgb;
int red;
int green;
int blue;
double percentPixel = 0;
System.out.println(height + " " + width + " " + img.getRGB(30, 30));
for (int h = 0; h<height; h++)
{
for (int w = 0; w<width; w++)
{
amountPixel++;
rgb = img.getRGB(w, h);
red = (rgb >> 16 ) & 0x000000FF;
green = (rgb >> 8 ) & 0x000000FF;
blue = (rgb) & 0x000000FF;
if (red == 0 && green == 0 && blue == 0)
{
amountBlackPixel ++;
}
}
}
percentPixel = (double)amountBlackPixel / (double)amountPixel;
System.out.println("amount pixel: "+amountPixel);
System.out.println("amount black pixel: "+amountBlackPixel);
System.out.println("amount pixel black percent: "+percentPixel);
}
}
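For the re-save step the question asks about, a minimal sketch could look like the following (the file names and the clamping to the 0-255 range are assumptions, not part of the original answer):
BufferedImage img = ImageIO.read(new File("test.bmp"));
for (int y = 0; y < img.getHeight(); y++) {
    for (int x = 0; x < img.getWidth(); x++) {
        int rgb = img.getRGB(x, y);
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        // The rule from the question: +20 if the pixel is (200, 200, 200), else -100
        int delta = (r == 200 && g == 200 && b == 200) ? 20 : -100;
        r = Math.min(255, Math.max(0, r + delta));
        g = Math.min(255, Math.max(0, g + delta));
        b = Math.min(255, Math.max(0, b + delta));
        img.setRGB(x, y, (0xFF << 24) | (r << 16) | (g << 8) | b);
    }
}
ImageIO.write(img, "png", new File("test_out.png"));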
In my 2D game I'm using graphic tools to create nice, smooth terrain represented by black color:
A simple algorithm written in Java looks for black color every 15 pixels, creating the following set of lines (gray):
As you can see, some places are mapped very badly and some are pretty good. In other cases it would not be necessary to sample every 15 pixels, e.g. if the terrain is flat.
What's the best way to convert this curve to a set of points [lines], using as few points as possible?
Sampling every 15 pixels = 55 FPS, 10 pixels = 40 FPS
The following algorithm does that job, sampling from right to left and printing an array that can be pasted into code:
public void loadMapFile(String path) throws IOException {
File mapFile = new File(path);
image = ImageIO.read(mapFile);
boolean black;
System.out.print("{ ");
int[] lastPoint = {0, 0};
for (int x = image.getWidth()-1; x >= 0; x -= 15) {
for (int y = 0; y < image.getHeight(); y++) {
black = image.getRGB(x, y) == -16777216; // -16777216 == 0xFF000000, opaque black
if (black) {
lastPoint[0] = x;
lastPoint[1] = y;
System.out.print("{" + (x) + ", " + (y) + "}, ");
break;
}
}
}
System.out.println("}");
}
I'm developing on Android, using Java and AndEngine.
This problem is nearly identical to the digitization of a signal (such as sound), where the basic law is that any component of the input whose frequency is too high for the sampling rate will not be reflected in the digitized output. So the concern is that if you check every 30 pixels and then test the middle as bmorris591 suggests, you might miss a 7-pixel hole between the sampling points. This suggests that if there are 10-pixel features you cannot afford to miss, you need to scan every 5 pixels: your sample rate should be twice the highest frequency present in the signal.
One thing that can help improve your algorithm is a better y-dimension search. Currently you are searching for the intersection between sky and terrain linearly, but a binary search should be faster:
int y = image.getHeight()/2; // Start searching from the middle of the image
int yIncr = y/2;
while (yIncr>0) {
if (image.getRGB(x, y) == -16777216) {
// We hit the terrain, go towards the sky
y-=yIncr;
} else {
// We hit the sky, go towards the terrain
y+=yIncr;
}
yIncr = yIncr/2;
}
// Make sure y is on the first terrain point: move y up or down a few pixels
// Only one of the following two loops will execute, and only one or two iterations max
while (image.getRGB(x, y) != -16777216) y++;
while (image.getRGB(x, y-1) == -16777216) y--;
Other optimizations are possible. If you know that your terrain has no cliffs, then you only need to search the window from lastY+maxDropoff to lastY-maxDropoff. Also, if your terrain can never be as tall as the entire bitmap, you don't need to search the top of the bitmap either. This should help to free some CPU cycles you can use for higher-resolution x-scanning of the terrain.
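A rough sketch of that windowed search (x is the current scan column; lastY and maxDropoff are assumed variables, not part of the code above):
int yStart = Math.max(0, lastY - maxDropoff);
int yEnd = Math.min(image.getHeight() - 1, lastY + maxDropoff);
for (int y = yStart; y <= yEnd; y++) {
    if (image.getRGB(x, y) == -16777216) { // first terrain pixel inside the window
        lastY = y;
        break;
    }
}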
I propose to find the border points that lie on the boundary between the white and dark pixels. After that we can digitize those points. To do that, we define DELTA, which specifies which points we should skip and which we should add to the result list.
DELTA = 3, Number of points = 223
DELTA = 5, Number of points = 136
DELTA = 10, Number of points = 70
Below I have put the source code, which displays the image and finds the points. I hope you will be able to read it and find a way to solve your problem.
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.imageio.ImageIO;
import javax.swing.JFrame;
import javax.swing.JPanel;
public class Program {
public static void main(String[] args) throws IOException {
BufferedImage image = ImageIO.read(new File("/home/michal/Desktop/FkXG1.png"));
PathFinder pathFinder = new PathFinder(10);
List<Point> borderPoints = pathFinder.findBorderPoints(image);
System.out.println(Arrays.toString(borderPoints.toArray()));
System.out.println(borderPoints.size());
JFrame frame = new JFrame();
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.getContentPane().add(new ImageBorderPanel(image, borderPoints));
frame.pack();
frame.setMinimumSize(new Dimension(image.getWidth(), image.getHeight()));
frame.setVisible(true);
}
}
class PathFinder {
private int maxDelta = 3;
public PathFinder(int delta) {
this.maxDelta = delta;
}
public List<Point> findBorderPoints(BufferedImage image) {
int width = image.getWidth();
int[][] imageInBytes = convertTo2DWithoutUsingGetRGB(image);
int[] borderPoints = findBorderPoints(width, imageInBytes);
List<Integer> indexes = dwindlePoints(width, borderPoints);
List<Point> points = new ArrayList<Point>(indexes.size());
for (Integer index : indexes) {
points.add(new Point(index, borderPoints[index]));
}
return points;
}
private List<Integer> dwindlePoints(int width, int[] borderPoints) {
List<Integer> indexes = new ArrayList<Integer>(width);
indexes.add(0); // always keep the first x index, so the first border point is included
int delta = 0;
for (int index = 1; index < width; index++) {
delta += Math.abs(borderPoints[index - 1] - borderPoints[index]);
if (delta >= maxDelta) {
indexes.add(index);
delta = 0;
}
}
return indexes;
}
private int[] findBorderPoints(int width, int[][] imageInBytes) {
int[] borderPoints = new int[width];
int black = Color.BLACK.getRGB();
for (int y = 0; y < imageInBytes.length; y++) {
int maxX = imageInBytes[y].length;
for (int x = 0; x < maxX; x++) {
int color = imageInBytes[y][x];
if (color == black && borderPoints[x] == 0) {
borderPoints[x] = y;
}
}
}
return borderPoints;
}
private int[][] convertTo2DWithoutUsingGetRGB(BufferedImage image) {
final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
final int width = image.getWidth();
final int height = image.getHeight();
final boolean hasAlphaChannel = image.getAlphaRaster() != null;
int[][] result = new int[height][width];
if (hasAlphaChannel) {
final int pixelLength = 4;
for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
int argb = 0;
argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
argb += ((int) pixels[pixel + 1] & 0xff); // blue
argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
result[row][col] = argb;
col++;
if (col == width) {
col = 0;
row++;
}
}
} else {
final int pixelLength = 3;
for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
int argb = 0;
argb += -16777216; // 255 alpha
argb += ((int) pixels[pixel] & 0xff); // blue
argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
result[row][col] = argb;
col++;
if (col == width) {
col = 0;
row++;
}
}
}
return result;
}
}
class ImageBorderPanel extends JPanel {
private static final long serialVersionUID = 1L;
private BufferedImage image;
private List<Point> borderPoints;
public ImageBorderPanel(BufferedImage image, List<Point> borderPoints) {
this.image = image;
this.borderPoints = borderPoints;
}
@Override
public void paintComponent(Graphics g) {
super.paintComponent(g);
g.drawImage(image, 0, 0, null);
Graphics2D graphics2d = (Graphics2D) g;
g.setColor(Color.YELLOW);
for (Point point : borderPoints) {
graphics2d.fillRect(point.x, point.y, 3, 3);
}
}
}
In my source code I have used the example from this question:
Java - get pixel array from image
The most efficient solution (with respect to points required) would be to allow for variable spacing between points along the X axis. This way, a large flat part would require very few points/samples and complex terrains would use more.
In 3D mesh processing, there is a nice mesh simplification algorithm named "quadric edge collapse", which you can adapt to your problem.
Here is the idea, translated to your problem - it actually gets much simpler than the original 3D algorithm:
Represent your curve with way too many points.
For each point, measure the error (i.e. difference to the smooth terrain) if you remove it.
Remove the point that gives the smallest error.
Repeat until you have reduced the number of points far enough or errors get too large.
To be more precise regarding step 2: Given points P, Q, R, the error of Q is the difference between the approximation of your terrain by two straight lines, P->Q and Q->R, and the approximation of your terrain by just one line P->R.
Note that when a point is removed only its neighbors need an update of their error value.
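A minimal greedy sketch of steps 2-4, using java.awt.Point and java.util.List (the helper names are assumptions, and the neighbors-only error update mentioned above is omitted for brevity):
static List<Point> simplify(List<Point> points, double maxError) {
    List<Point> result = new ArrayList<>(points);
    while (result.size() > 2) {
        int bestIndex = -1;
        double bestError = Double.MAX_VALUE;
        for (int i = 1; i < result.size() - 1; i++) {
            Point p = result.get(i - 1), q = result.get(i), r = result.get(i + 1);
            // Error of removing q: distance from q to the straight line p->r
            double error = distanceToLine(q, p, r);
            if (error < bestError) {
                bestError = error;
                bestIndex = i;
            }
        }
        if (bestError > maxError) break; // removing more points would distort the terrain too much
        result.remove(bestIndex);
    }
    return result;
}

// Perpendicular distance from q to the straight line through p and r
static double distanceToLine(Point q, Point p, Point r) {
    double dx = r.x - p.x, dy = r.y - p.y;
    double len = Math.hypot(dx, dy);
    if (len == 0) return Math.hypot(q.x - p.x, q.y - p.y);
    return Math.abs(dy * q.x - dx * q.y + (double) r.x * p.y - (double) r.y * p.x) / len;
}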
What I'm trying to do is compute the 2D DCT of an image in Java and then save the result back to a file.
Read file:
coverImage = readImg(coverPath);
private BufferedImage readImg(String path) {
BufferedImage destination = null;
try {
destination = ImageIO.read(new File(path));
} catch (IOException e) {
e.printStackTrace();
}
return destination;
}
Convert to float array:
cover = convertToFloatArray(coverImage);
private float[] convertToFloatArray(BufferedImage source) {
securedImage = (WritableRaster) source.getData();
float[] floatArray = new float[source.getHeight() * source.getWidth()];
floatArray = securedImage.getPixels(0, 0, source.getWidth(), source.getHeight(), floatArray);
return floatArray;
}
Run the DCT:
runDCT(cover, coverImage.getHeight(), coverImage.getWidth());
private void runDCT(float[] floatArray, int rows, int cols) {
dct = new FloatDCT_2D(rows, cols);
dct.forward(floatArray, false);
securedImage.setPixels(0, 0, cols, rows, floatArray);
}
And then save it as image:
convertDctToImage(securedImage, coverImage.getHeight(), coverImage.getWidth());
private void convertDctToImage(WritableRaster secured, int rows, int cols) {
coverImage.setData(secured);
File file = new File(securedPath);
try {
ImageIO.write(coverImage, "png", file);
} catch (IOException ex) {
Logger.getLogger(DCT2D.class.getName()).log(Level.SEVERE, null, ex);
}
}
But what I get is: http://kyle.pl/up/2012/05/29/dct_stack.png
Can anyone tell me what I'm doing wrong? Or maybe I don't understand something here?
This is a piece of code that works for me:
//reading image
BufferedImage image = javax.imageio.ImageIO.read(new File(filename));
//width * 2, because DoubleFFT_2D needs 2x more space - for Real and Imaginary parts of complex numbers
double[][] brightness = new double[image.getHeight()][image.getWidth() * 2];
//convert colored image to grayscale (brightness of each pixel)
for ( int y = 0; y < image.getHeight(); y++ ) {
for ( int x = 0; x < image.getWidth(); x++ ) {
//notice x and y swapped - it's JTransforms format of arrays
//getRGB is used here so the code does not depend on the raster's internal transfer type
brightness[y][x] = brightnessRGB(image.getRGB(x, y));
}
}
//do FT (not FFT, because FFT is only* for images with width and height being 2**N)
//DoubleFFT_2D writes data to the same array - to brightness
new DoubleFFT_2D(image.getHeight(), image.getWidth()).realForwardFull(brightness);
//visualising frequency domain
BufferedImage fd = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
for ( int y = 0; y < image.getHeight(); y++ ) {
for ( int x = 0; x < image.getWidth(); x++ ) {
//we calculate the complex number vector length (sqrt(Re**2 + Im**2)). But these lengths are too big to
//fit in the 0 - 255 scale of colors, so I divide by 223. Instead of "223", you may want to choose
//another factor, which would make your frequency domain look best
int power = (int) (Math.sqrt(Math.pow(brightness[y][2 * x], 2) + Math.pow(brightness[y][2 * x + 1], 2)) / 223);
power = power > 255 ? 255 : power;
//draw a grayscale color on image "fd"
fd.setRGB(x, y, new Color(power, power, power).getRGB());
}
}
draw(fd);
The resulting image should look like a big black space in the middle with white spots in all four corners. Usually people visualise the frequency domain so that zero frequency appears in the center of the image. So, if you need the classical FD view (the one that looks like a star for real-life images), you need to adjust the "fd.setRGB(x, y..." call a bit:
int w2 = image.getWidth() / 2;
int h2 = image.getHeight() / 2;
int newX = x + w2 >= image.getWidth() ? x - w2 : x + w2;
int newY = y + h2 >= image.getHeight() ? y - h2 : y + h2;
fd.setRGB(newX, newY, new Color(power, power, power).getRGB());
brightnessRGB and draw methods for the lazy:
public static int brightnessRGB(int rgb) {
int r = (rgb >> 16) & 0xff;
int g = (rgb >> 8) & 0xff;
int b = rgb & 0xff;
return (r+g+b)/3;
}
private static void draw(BufferedImage img) {
JLabel picLabel = new JLabel(new ImageIcon(img));
JPanel jPanelMain = new JPanel();
jPanelMain.add(picLabel);
JFrame jFrame = new JFrame();
jFrame.add(jPanelMain);
jFrame.pack();
jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
jFrame.setVisible(true);
}
I know I'm a bit late, but I just did all of this for my program. So let it be here for those who get here from googling.
How do I convert the white background of an image into a transparent background? Can anyone tell me how to do this?
The first result from Google is this:
Make a color transparent
http://www.rgagnon.com/javadetails/java-0265.html
It makes the blue part of an image transparent, but I'm sure you can adapt that to use white instead
(hint: Pass Color.WHITE to the makeColorTransparent function, instead of Color.BLUE)
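For example (a hypothetical call, assuming the makeColorTransparent(BufferedImage, Color) helper from that page):
Image transparentImage = makeColorTransparent(bufferedImage, Color.WHITE);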
Found a more complete and modern answer here: How to make a color transparent in a BufferedImage and save as PNG
This method will make the background transparent. You need to pass the image you want to modify, the colour, and a tolerance.
final int color = ret.getRGB(0, 0); // "ret" is the source BufferedImage you want to modify
final Image imageWithTransparency = makeColorTransparent(ret, new Color(color), 10);
final BufferedImage transparentImage = imageToBufferedImage(imageWithTransparency);
private static BufferedImage imageToBufferedImage(final Image image) {
final BufferedImage bufferedImage =
new BufferedImage(image.getWidth(null), image.getHeight(null), BufferedImage.TYPE_INT_ARGB);
final Graphics2D g2 = bufferedImage.createGraphics();
g2.drawImage(image, 0, 0, null);
g2.dispose();
return bufferedImage;
}
private static Image makeColorTransparent(final BufferedImage im, final Color color, int tolerance) {
int temp = 0;
if (tolerance < 0 || tolerance > 100) {
System.err.println("The tolerance is a percentage, so the value has to be between 0 and 100.");
temp = 0;
} else {
temp = tolerance * (0xFF000000 | 0xFF000000) / 100;
}
final int toleranceRGB = Math.abs(temp);
final ImageFilter filter = new RGBImageFilter() {
// The color we are looking for (white)... Alpha bits are set to opaque
public int markerRGBFrom = (color.getRGB() | 0xFF000000) - toleranceRGB;
public int markerRGBTo = (color.getRGB() | 0xFF000000) + toleranceRGB;
public final int filterRGB(final int x, final int y, final int rgb) {
if ((rgb | 0xFF000000) >= markerRGBFrom && (rgb | 0xFF000000) <= markerRGBTo) {
// Mark the alpha bits as zero - transparent
return 0x00FFFFFF & rgb;
} else {
// Nothing to do
return rgb;
}
}
};
final ImageProducer ip = new FilteredImageSource(im.getSource(), filter);
return Toolkit.getDefaultToolkit().createImage(ip);
}
Here is my solution. This filter will remove the background from any image, as long as the background color appears in the top-left corner pixel.
private static class BackgroundFilter extends RGBImageFilter{
boolean setUp = false;
int bgColor;
@Override
public int filterRGB(int x, int y, int rgb) {
int colorWOAlpha = rgb & 0xFFFFFF;
if( ! setUp && x == 0 && y == 0 ){
bgColor = colorWOAlpha;
setUp = true;
}
else if( colorWOAlpha == bgColor )
return colorWOAlpha;
return rgb;
}
}
Elsewhere...
ImageFilter bgFilter = new BackgroundFilter();
ImageProducer ip = new FilteredImageSource(image.getSource(), bgFilter);
image = Toolkit.getDefaultToolkit().createImage(ip);
I am aware that this question is over a decade old and that some answers have already been given. However, none of them is satisfactory if the pixels inside the image are the same color as the background. Let's take a practical example. Given these images:
both have a white background, but the white color is also inside the image to be cut out. In other words, the white pixels on the outside of the two pennants must become transparent, while the ones on the inside must remain as they are. Add to this the complication that the white of the background is not perfectly white (due to JPEG compression), so a tolerance is needed. The issue can be made more complex by figures that are not only convex, but also concave.
I created an algorithm in Java that solves the problem very well; I tested it with the two figures shown here. The following code refers to the Java API of Codename One (https://www.codenameone.com/javadoc/), but it can be repurposed for the Java SE API or implemented in other languages. The important thing is to understand the rationale.
/**
* Given an image with no transparency, it makes the white background
* transparent, provided that the entire image outline has a different color
* from the background; the internal pixels of the image, even if they have
* the same color as the background, are not changed.
*
* @param source image with a white background; the image must have an
* outline of a different color from background.
* @return a new image with a transparent background
*/
public static Image makeBackgroundTransparent(Image source) {
/*
* Algorithm
*
* Pixels must be iterated in the four possible directions: (1) left to
* right, for each row (top to bottom); (2) from right to left, for each
* row (from top to bottom); (3) from top to bottom, for each column
* (from left to right); (4) from bottom to top, for each column (from
* left to right).
*
* In each iteration, each white pixel is replaced with a transparent
* one. Each iteration ends when a pixel of color other than white (or
* a transparent pixel) is encountered.
*/
if (source == null) {
throw new IllegalArgumentException("ImageUtilities.makeBackgroundTransparent -> null source image");
}
if (source instanceof FontImage) {
source = ((FontImage) source).toImage();
}
int[] pixels = source.getRGB(); // array instance containing the ARGB data within this image
int width = source.getWidth();
int height = source.getHeight();
int tolerance = 1000000; // value chosen through several attempts
// check if the first pixel is transparent
if ((pixels[0] >> 24) == 0x00) {
return source; // nothing to do, the image already has a transparent background
}
Log.p("Converting white background to transparent...", Log.DEBUG);
// 1. Left to right, for each row (top to bottom)
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int color = pixels[y * width + x];
if ((color >> 24) != 0x00 && color >= ColorUtil.WHITE - tolerance && color <= ColorUtil.WHITE + tolerance) { // means white with tolerance and no transparency
pixels[y * width + x] = 0x00; // means full transparency
} else {
break;
}
}
}
// 2. Right to left, for each row (top to bottom)
for (int y = 0; y < height; y++) {
for (int x = width - 1; x >= 0; x--) {
int color = pixels[y * width + x];
if ((color >> 24) != 0x00 && color >= ColorUtil.WHITE - tolerance && color <= ColorUtil.WHITE + tolerance) { // means white with tolerance and no transparency
pixels[y * width + x] = 0x00; // means full transparency
} else {
break;
}
}
}
// 3. Top to bottom, for each column (from left to right)
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
int color = pixels[y * width + x];
if ((color >> 24) != 0x00 && color >= ColorUtil.WHITE - tolerance && color <= ColorUtil.WHITE + tolerance) { // means white with tolerance and no transparency
pixels[y * width + x] = 0x00; // means full transparency
} else {
break;
}
}
}
// 4. Bottom to top, for each column (from left to right)
for (int x = 0; x < width; x++) {
for (int y = height - 1; y >= 0; y--) {
int color = pixels[y * width + x];
if ((color >> 24) != 0x00 && color >= ColorUtil.WHITE - tolerance && color <= ColorUtil.WHITE + tolerance) { // means white with tolerance and no transparency
pixels[y * width + x] = 0x00; // means full transparency
} else {
break;
}
}
}
return EncodedImage.createFromRGB(pixels, width, height, false);
}