I am developing a game in Java, just for fun. It is a ball-and-brick breaking game of sorts.
Here is a level. When the ball hits one of the orange bricks, I would like to create a chain reaction that explodes all other bricks that are NOT gray (unbreakable) and are within reach of the brick being exploded.
So it would clear out everything in this level except the gray bricks.
I am thinking I should ask the brick that is being exploded for the bricks to the LEFT, RIGHT, UP, and DOWN of it, and then start the same process on those cells.
//NOTE TO SELF: read up on Enums and List
When an explosive cell is hit by the ball, it calls explodeMyAdjecentCells():
//This is in the Cell class
public void explodeMyAdjecentCells() {
    exploded = true;
    ballGame.breakCell(x, y, imageURL[thickness - 1][0]);
    // Ask the handler to explode the four neighbours (left, right, up, down).
    cellBlocks.explodeCell(getX() - getWidth(), getY());
    cellBlocks.explodeCell(getX() + getWidth(), getY());
    cellBlocks.explodeCell(getX(), getY() - getHeight());
    cellBlocks.explodeCell(getX(), getY() + getHeight());
    remove();
    ballGame.playSound("src\\ballgame\\Sound\\cellBrakes.wav", 100.0f, 0.0f, false, 0.0d);
}
//This is the CellHandler->(CellBlocks)
public void explodeCell(int _X, int _Y) {
    for (int c = 0; c < cells.length; c++) {
        if (cells[c] != null && !cells[c].hasExploded()) {
            if (cells[c].getX() == _X && cells[c].getY() == _Y) {
                int type = cells[c].getThickness();
                // Types 7, 6 and 2 do not take part in the chain reaction.
                if (type != 7 && type != 6 && type != 2) {
                    cells[c].explodeMyAdjecentCells();
                }
            }
        }
    }
}
It successfully removes all the adjacent cells,
But in the explodeMyAdjecentCells() method, I have this line of code
ballGame.breakCell(x, y, imageURL[thickness - 1][0]);
This line tells the ParticleHandler to create 25 small images (particles) of the exploded cell.
Though all my cells are removed, the ParticleHandler does not create particles for all of them.
The problem was solved just now; it's really silly.
I had set the ParticleHandler to create at most 1500 particles. My god, how did I not see that!
private int particleCellsMax = 1500; // before
private int particleCellsMax = 2500; // after
Thanks for all the help, people. I will upload the source for creating the particles, just for fun, in case anyone needs it.
The source code for splitting image into parts was taken from:
Kalani's Tech Blog
//Particle Handler
public void breakCell(int _X, int _Y, String URL) {
    File file = new File(URL);
    try (FileInputStream fis = new FileInputStream(file)) {
        BufferedImage image = ImageIO.read(fis);
        int rows = 5;
        int columns = 5;
        int parts = rows * columns;
        int partWidth = image.getWidth() / columns;
        int partHeight = image.getHeight() / rows;
        int count = 0;
        BufferedImage[] imgs = new BufferedImage[parts];
        for (int x = 0; x < columns; x++) {
            for (int y = 0; y < rows; y++) {
                imgs[count] = new BufferedImage(partWidth, partHeight, image.getType());
                Graphics2D g = imgs[count++].createGraphics();
                // Note: x iterates columns but indexes rows in the source rectangle
                // (and vice versa); this only works because rows == columns.
                g.drawImage(image, 0, 0, partWidth, partHeight, partWidth * y, partHeight * x,
                        partWidth * y + partWidth, partHeight * x + partHeight, null);
                g.dispose();
            }
        }
        int numParts = imgs.length;
        int c = 0;
        for (int iy = 0; iy < rows; iy++) {
            for (int ix = 0; ix < columns; ix++) {
                if (c < numParts) {
                    Image imagePart = Toolkit.getDefaultToolkit().createImage(imgs[c].getSource());
                    createCellPart(_X + ((image.getWidth() / columns) * ix),
                            _Y + ((image.getHeight() / rows) * iy), c, imagePart);
                    c++;
                } else {
                    break;
                }
            }
        }
    } catch (IOException io) {
        io.printStackTrace(); // don't swallow the error silently
    }
}
You could consider looking at this in a more OO way, using 'tell, don't ask'. You would have a Brick class which knows its own colour and its adjacent bricks. You would then tell the first brick to explode; it would know that if it is orange (consider using enums for this, not just numbers), it should tell its adjacent bricks to 'chain react' (or something like that). Those bricks would then decide what to do: explode and call their own adjacent bricks in the case of an orange brick, or do nothing in the case of a gray brick.
I know it's quite different from what you're doing currently, but it will hopefully give you a better-structured program.
I would imagine a method that recursively gets all touching cells of a similar colour.
Then you can operate on that list (of all touching cells) pretty easily and break all the ones that haven't been broken.
Also note that your getAdjentCell() method has side effects (it does the breaking), which isn't very intuitive given the name.
// I agree with Matt that color (or type) should probably be an enum,
// or at least a class. int isn't very descriptive
public enum CellType { GRAY, RED, ORANGE }
public class Cell {
    ....
    public final CellType type;

    /**
     * Recursively find all adjacent cells that have the same type as this one.
     */
    public List<Cell> getTouchingSimilarCells() {
        List<Cell> result = new ArrayList<Cell>();
        collectTouchingSimilarCells(result);
        return result;
    }

    // Helper that tracks already-visited cells; without this, two adjacent
    // cells of the same type would recurse into each other forever.
    private void collectTouchingSimilarCells(List<Cell> visited) {
        visited.add(this);
        for (Cell c : getAdjecentCells()) {
            if (c != null && c.type == this.type && !visited.contains(c)) {
                c.collectTouchingSimilarCells(visited);
            }
        }
    }
    /**
     * Get the 4 adjacent cells (above, below, left and right).<br/>
     * NOTE: a cell may be null in the list if it does not exist.
     */
    public List<Cell> getAdjecentCells() {
        List<Cell> result = new ArrayList<Cell>();
        result.add(cellBlock(this.getX() + 1, this.getY()));
        result.add(cellBlock(this.getX() - 1, this.getY()));
        result.add(cellBlock(this.getX(), this.getY() + 1));
        result.add(cellBlock(this.getX(), this.getY() - 1));
        return result;
    }
}
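A minimal sketch of how the game code might consume that list; explode() here is an assumed stand-in for whatever break/remove logic the Cell class actually has:
// Hypothetical trigger, run when the ball hits an orange cell:
List<Cell> chain = hitCell.getTouchingSimilarCells();
for (Cell c : chain) {
    if (!c.hasExploded()) {
        c.explode(); // break the brick, spawn particles, play the sound
    }
}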
Related
I'm working on a game, nothing serious, just for fun.
I wrote a class 'ImageBuilder' to help create some images.
Everything works fine, except one thing.
I initialize a variable like this:
// other stuff
m_tile = new ImageBuilder(TILE_SIZE, TILE_SIZE, BufferedImage.TYPE_INT_RGB).paint(0xff069dee).paintBorder(0xff4c4a4a, 1).build();
// other stuff
Then, in the rendering method, I have:
for (int x = 0; x < 16; x++) {
    for (int y = 0; y < 16; y++) {
        g.drawImage(m_tile, x * (TILE_SIZE + m_padding.x) + m_margin.x, y * (TILE_SIZE + m_padding.y) + m_margin.y, null);
    }
}
Note: m_padding and m_margin are just two Vector2i
This draws a simple 16x16 table on the screen using that image, but the game is almost frozen; I can't get more than about 10 FPS.
I tried creating the image without that class, like this (TILE_SIZE = 32):
m_tile = new BufferedImage(TILE_SIZE, TILE_SIZE, BufferedImage.TYPE_INT_RGB);
for (int x = 0; x < TILE_SIZE; x++) {
    for (int y = 0; y < TILE_SIZE; y++) {
        if (x == 0 || y == 0 || x + 1 == TILE_SIZE || y + 1 == TILE_SIZE)
            m_tile.setRGB(x, y, 0x4c4a4a);
        else
            m_tile.setRGB(x, y, 0x069dee);
    }
}
This time, I get 60 FPS.
I can't figure out what the difference is; I have created images using 'ImageBuilder' before and everything was fine, but not this time.
ImageBuilder class:
// Constructor
public ImageBuilder(int width, int height, int imageType) {
    this.m_width = width;
    this.m_height = height;
    this.m_image = new BufferedImage(m_width, m_height, imageType);
    this.m_pixels = ((DataBufferInt) m_image.getRaster().getDataBuffer()).getData();
    this.m_image_type = imageType;
}

public ImageBuilder paint(int color) {
    for (int i = 0; i < m_pixels.length; i++) m_pixels[i] = color;
    return this;
}

public ImageBuilder paintBorder(int color, int stroke) {
    for (int x = 0; x < m_width; x++) {
        for (int y = 0; y < m_height; y++) {
            if (x < stroke || y < stroke || x + stroke >= m_width || y + stroke >= m_height) {
                m_pixels[x + y * m_width] = color;
            }
        }
    }
    return this;
}

public BufferedImage build() {
    return m_image;
}
There are other methods, but I don't call them, so I don't think it is necessary to include them.
What am I doing wrong?
My guess is that the problem is your ImageBuilder accessing the backing data array of the data buffer:
this.m_pixels = ((DataBufferInt) m_image.getRaster().getDataBuffer()).getData();
Doing so may (will) ruin the chances of this image being hardware accelerated. This is documented behaviour, from the getData() API doc:
Note that calling this method may cause this DataBuffer object to be incompatible with performance optimizations used by some implementations (such as caching an associated image in video memory).
You could probably get around this easily by using a temporary image in your builder, and returning from the build() method a copy of the temp image that has not been "tampered" with.
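A minimal sketch of that idea, assuming the builder keeps its existing fields (m_width, m_height, m_image_type):
public BufferedImage build() {
    // Copy the tampered temp image into a fresh image whose DataBuffer has
    // never been exposed, so the returned image can stay managed/accelerated.
    BufferedImage copy = new BufferedImage(m_width, m_height, m_image_type);
    Graphics2D g = copy.createGraphics();
    g.drawImage(m_image, 0, 0, null);
    g.dispose();
    return copy;
}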
For best performance, always using a compatible image (as in createCompatibleImage(), mentioned by @VGR in the comments) is a good idea too. This should ensure you have the fastest possible hardware blits.
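For that variant, a sketch of obtaining the compatible image (an opaque compatible image matches TYPE_INT_RGB content; draw m_image into it exactly as in the build() sketch above):
GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
        .getDefaultScreenDevice().getDefaultConfiguration();
BufferedImage copy = gc.createCompatibleImage(m_width, m_height);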
(please don't mark this question as not clear, I spent a lot of time posting it ;) )
Okay, I am trying to make a simple 2D Java game engine as a learning project, and part of it is rendering a filled polygon as a feature.
I am creating this algorithm myself, and I really can't figure out what I am doing wrong.
My thought process is something like this:
Loop through every line, get the number of outline points in that line, then get the X location of every point in that line.
Then loop through the line again, this time checking whether the x in the loop is inside one of the runs delimited by the points array; if so, draw it.
Disclaimer: the Polygon class is another type of mesh, and its draw method returns an int array with lines drawn through each vertex.
Disclaimer 2: I've tried other people's solutions, but none really helped me and none really explained it properly (which defeats the purpose of a learning project).
The draw methods are called once per frame.
FilledPolygon:
@Override
public int[] draw() {
    int[] pixels = new Polygon(verts).draw();
    int[] filled = new int[width * height];
    for (int y = 0; y < height; y++) {
        // Count the outline pixels on this row...
        int count = 0;
        for (int x = 0; x < width; x++) {
            if (pixels[x + y * width] == 0xffffffff) {
                count++;
            }
        }
        // ...then collect their x positions.
        int[] points = new int[count];
        int current = 0;
        for (int x = 0; x < width; x++) {
            if (pixels[x + y * width] == 0xffffffff) {
                points[current] = x;
                current++;
            }
        }
        // Fill between successive pairs of outline pixels.
        if (count >= 2) {
            int num = count;
            if (count % 2 != 0)
                num--;
            for (int i = 0; i < num; i += 2) {
                for (int x = points[i]; x < points[i + 1]; x++) {
                    filled[x + y * width] = 0xffffffff;
                }
            }
        }
    }
    return filled;
}
The Polygon class simply uses Bresenham's line algorithm and has nothing to do with the problem.
The game class:
@Override
public void load() {
    obj = new EngineObject();
    obj.addComponent(new MeshRenderer(new FilledPolygon(new int[][] {
            {0, 0},
            {60, 0},
            {0, 60},
            {80, 50}
    })));
    ((MeshRenderer) (obj.getComponent(MeshRenderer.class))).color = CYAN;
    obj.transform.position.Y = 100;
}
The expected result is to get this shape filled in (it was created using the Polygon mesh):
The actual result of using the FilledPolygon mesh:
Your code seems to have several problems, but I will not focus on those.
Your approach, based on drawing the outline and then filling the "inside" runs, cannot work in the general case: the outlines join at vertices and intersections, and the outside-edge-inside-edge-outside alternation is broken in an unrecoverable way (you can't know which segments to fill by just looking at one row).
You'd better use a standard polygon-filling algorithm; you will find many descriptions on the Web.
For a simple but somewhat inefficient solution, work as follows (a code sketch follows the list):
process all lines between the minimum and maximum ordinates; let Y be the current ordinate;
loop on the edges;
assign every vertex a positive or negative sign if y ≥ Y or y < Y (mind the asymmetry !);
whenever the endpoints of an edge have a different sign, compute the intersection between the edge and the line;
you will get an even number of intersections; sort them horizontally;
draw between every other point.
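As promised above, here is a minimal even-odd scanline fill along those lines. It assumes verts is an int[][] of (x, y) pairs and that width and height are the fields from the question, so it is a sketch rather than a drop-in replacement:
private int[] fillPolygon(int[][] verts) {
    int[] filled = new int[width * height];
    int n = verts.length;
    for (int y = 0; y < height; y++) {
        // Collect intersections of each edge with the current scanline.
        java.util.List<Integer> xs = new java.util.ArrayList<Integer>();
        for (int i = 0; i < n; i++) {
            int[] a = verts[i];
            int[] b = verts[(i + 1) % n];
            // Sign rule (mind the asymmetry): a vertex counts as "above"
            // when its y >= Y, as "below" when its y < Y.
            boolean aAbove = a[1] >= y;
            boolean bAbove = b[1] >= y;
            if (aAbove != bAbove) {
                double t = (y - a[1]) / (double) (b[1] - a[1]);
                xs.add((int) Math.round(a[0] + t * (b[0] - a[0])));
            }
        }
        java.util.Collections.sort(xs); // always an even number of intersections
        // Draw between every other pair.
        for (int i = 0; i + 1 < xs.size(); i += 2) {
            int from = Math.max(0, xs.get(i));
            int to = Math.min(width - 1, xs.get(i + 1));
            for (int x = from; x <= to; x++) {
                filled[x + y * width] = 0xffffffff;
            }
        }
    }
    return filled;
}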
You can get a more efficient solution by keeping a trace of which edges cross the current line, in a so-called "active list". Check the algorithms known as "scanline fill".
Note that you imply that pixels[] has the same width*height size as filled[]. Based on the mangled output, I would say that they are just not the same.
Otherwise, if you just want to fill a scanline (assuming everything is convex), that code is overcomplicated; simply look for the endpoints and loop between them:
public int[] draw() {
    int[] pixels = new Polygon(verts).draw();
    int[] filled = new int[width * height];
    for (int y = 0; y < height; y++) {
        // Find the leftmost outline pixel on this row.
        int left = -1;
        for (int x = 0; x < width; x++) {
            if (pixels[x + y * width] == 0xffffffff) {
                left = x;
                break;
            }
        }
        if (left >= 0) {
            // Find the rightmost outline pixel, then fill between the two.
            int right = left;
            for (int x = width - 1; x > left; x--) {
                if (pixels[x + y * width] == 0xffffffff) {
                    right = x;
                    break;
                }
            }
            for (int x = left; x <= right; x++) {
                filled[x + y * width] = 0xffffffff;
            }
        }
    }
    return filled;
}
However this kind of approach relies on having the entire polygon in the view, which may not always be the case in real life.
To improve my knowledge of imaging and get some experience working with the topics, I decided to create a license plate recognition algorithm on the Android platform.
The first step is detection, for which I decided to implement a recent paper titled "A Robust and Efficient Approach to License Plate Detection". The paper presents its idea very well and uses quite simple techniques to achieve detection. Apart from some details lacking in the paper, I implemented the bilinear downsampling, the conversion to grayscale, and the edge detection + adaptive thresholding as described in Sections 3A, 3B.1, and 3B.2.
Unfortunately, I am not getting the output this paper presents in e.g. figures 3 and 6.
The image I use for testing is as follows:
The grayscale (and downsampled) version looks fine (see the bottom of this post for the actual implementation); I used a well-known combination of the RGB components to produce it (the paper does not mention how, so I took a guess).
Next is the initial edge detection using the Sobel filter outlined. This produces an image similar to the ones presented in figure 6 of the paper.
And finally, to remove the "weak edges", they apply adaptive thresholding using a 20x20 window. Here is where things go wrong.
As you can see, it does not function properly, even though I am using their stated parameter values. Additionally, I have tried:
Changing the Beta parameter.
Using a 2D int array instead of Bitmap objects to simplify creating the integral image.
Trying a higher Gamma parameter so the initial edge detection allows more "edges".
Changing the window to e.g. 10x10.
Yet none of the changes made an improvement; it keeps producing images like the one above. My question is: what am I doing differently from what is outlined in the paper, and how can I get the desired output?
Code
The (cleaned) code I use:
public int[][] toGrayscale(Bitmap bmpOriginal) {
    int width = bmpOriginal.getWidth();
    int height = bmpOriginal.getHeight();
    int[][] greys = new int[width][height];
    // Scan through all pixels.
    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            // Get the pixel colour components.
            int pixel = bmpOriginal.getPixel(x, y);
            int R = Color.red(pixel);
            int G = Color.green(pixel);
            int B = Color.blue(pixel);
            // Standard luminance weights (ITU-R BT.601).
            int gray = (int) (0.2989 * R + 0.5870 * G + 0.1140 * B);
            greys[x][y] = gray;
        }
    }
    return greys;
}
The code for edge detection:
private int[][] detectEges(int[][] detectionBitmap) {
    int width = detectionBitmap.length;
    int height = detectionBitmap[0].length;
    int[][] edges = new int[width][height];
    int c1 = 0;
    int c2 = 0;
    // Loop over all pixels in the bitmap (skipping a 2-pixel horizontal border).
    for (int y = 0; y < height; y++) {
        for (int x = 2; x < width - 2; x++) {
            // d0 is a horizontal second-difference response around the pixel.
            int p0 = detectionBitmap[x][y];
            int p1 = detectionBitmap[x - 1][y];
            int p2 = detectionBitmap[x + 1][y];
            int p3 = detectionBitmap[x - 2][y];
            int p4 = detectionBitmap[x + 2][y];
            int d0 = Math.abs(p1 + p2 - 2 * p0) + Math.abs(p3 + p4 - 2 * p0);
            // Clamp strong responses to Gamma.
            if (d0 >= Gamma) {
                c1++;
                edges[x][y] = Gamma;
            } else {
                c2++;
                edges[x][y] = d0;
            }
        }
    }
    return edges;
}
The code for adaptive thresholding. The SAT implementation is taken from here:
private int[][] AdaptiveThreshold(int[][] detectionBitmap) {
    // Create the integral image (in place: detectionBitmap becomes the SAT).
    processSummedAreaTable(detectionBitmap);
    int width = detectionBitmap.length;
    int height = detectionBitmap[0].length;
    int[][] binaryImage = new int[width][height];
    int white = 0;
    int black = 0;
    int h_w = 20; // the window size
    int half = h_w / 2;
    // Loop over all pixels in the bitmap.
    for (int y = half; y < height - half; y++) {
        for (int x = half; x < width - half; x++) {
            // Sum the window around (x, y).
            int sum = 0;
            for (int k = -half; k < half - 1; k++) {
                for (int j = -half; j < half - 1; j++) {
                    sum += detectionBitmap[x + k][y + j];
                }
            }
            // Keep the pixel if it exceeds Beta times the window mean.
            if (detectionBitmap[x][y] >= (sum / (h_w * h_w)) * Beta) {
                binaryImage[x][y] = 255;
                white++;
            } else {
                binaryImage[x][y] = 0;
                black++;
            }
        }
    }
    return binaryImage;
}
/**
 * Process given matrix into its summed area table (in-place).
 * O(MN) time, O(1) space.
 * @param matrix source matrix
 */
private void processSummedAreaTable(int[][] matrix) {
    int rowSize = matrix.length;
    int colSize = matrix[0].length;
    for (int i = 0; i < rowSize; i++) {
        for (int j = 0; j < colSize; j++) {
            matrix[i][j] = getVal(i, j, matrix);
        }
    }
}
/**
 * Helper method for processSummedAreaTable.
 * @param row current row number
 * @param col current column number
 * @param matrix source matrix
 * @return sub-matrix sum
 */
private int getVal(int row, int col, int[][] matrix) {
    int leftSum;     // sub-matrix sum of the left matrix
    int topSum;      // sub-matrix sum of the top matrix
    int topLeftSum;  // sub-matrix sum of the top-left matrix
    int curr = matrix[row][col]; // current cell value
    /* top-left value is itself */
    if (row == 0 && col == 0) {
        return curr;
    }
    /* top row */
    else if (row == 0) {
        leftSum = matrix[row][col - 1];
        return curr + leftSum;
    }
    /* left-most column */
    if (col == 0) {
        topSum = matrix[row - 1][col];
        return curr + topSum;
    }
    else {
        leftSum = matrix[row][col - 1];
        topSum = matrix[row - 1][col];
        topLeftSum = matrix[row - 1][col - 1]; // overlap between leftSum and topSum
        return curr + leftSum + topSum - topLeftSum;
    }
}
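For reference, once processSummedAreaTable has run, the sum of any window can be read in O(1) from the table instead of re-summing individual entries; a sketch over the inclusive bounds [x0..x1] x [y0..y1], with border clamping left to the caller:
private int windowSum(int[][] sat, int x0, int y0, int x1, int y1) {
    // Standard inclusion-exclusion on the summed-area table.
    int total = sat[x1][y1];
    if (x0 > 0) total -= sat[x0 - 1][y1];
    if (y0 > 0) total -= sat[x1][y0 - 1];
    if (x0 > 0 && y0 > 0) total += sat[x0 - 1][y0 - 1];
    return total;
}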
Marvin provides an approach to find text regions. Perhaps it can be a starting point for you:
Find Text Regions in Images:
http://marvinproject.sourceforge.net/en/examples/findTextRegions.html
This approach was also used in this question:
How do I separates text region from image in java
Using your image I got this output:
Source Code:
package textRegions;

import static marvin.MarvinPluginCollection.findTextRegions;

import java.awt.Color;
import java.util.List;

import marvin.image.MarvinImage;
import marvin.image.MarvinSegment;
import marvin.io.MarvinImageIO;

public class FindVehiclePlate {
    public FindVehiclePlate() {
        MarvinImage image = MarvinImageIO.loadImage("./res/vehicle.jpg");
        image = findText(image, 30, 20, 100, 170);
        MarvinImageIO.saveImage(image, "./res/vehicle_out.png");
    }

    public MarvinImage findText(MarvinImage image, int maxWhiteSpace, int maxFontLineWidth, int minTextWidth, int grayScaleThreshold) {
        List<MarvinSegment> segments = findTextRegions(image, maxWhiteSpace, maxFontLineWidth, minTextWidth, grayScaleThreshold);
        for (MarvinSegment s : segments) {
            if (s.height >= 10) {
                s.y1 -= 20;
                s.y2 += 20;
                image.drawRect(s.x1, s.y1, s.x2 - s.x1, s.y2 - s.y1, Color.red);
                image.drawRect(s.x1 + 1, s.y1 + 1, (s.x2 - s.x1) - 2, (s.y2 - s.y1) - 2, Color.red);
                image.drawRect(s.x1 + 2, s.y1 + 2, (s.x2 - s.x1) - 4, (s.y2 - s.y1) - 4, Color.red);
            }
        }
        return image;
    }

    public static void main(String[] args) {
        new FindVehiclePlate();
    }
}
Well, I have been watching a couple of videos on YouTube about how to take sprites from a sprite sheet (8x8), and I really liked the tutorial by DesignsByZepher. However, the method he uses results in him importing a sprite sheet and then changing the colours to in-code selected colours.
http://www.youtube.com/watch?v=6FMgQNDNMJc (displaying the sheet)
http://www.youtube.com/watch?v=7eotyB7oNHE (for the colour rendering)
The code that I have made from watching his videos is:
package exikle.learn.game.gfx;

import java.awt.image.BufferedImage;
import java.io.IOException;

import javax.imageio.ImageIO;

public class SpriteSheet {

    public String path;
    public int width;
    public int height;
    public int[] pixels;

    public SpriteSheet(String path) {
        BufferedImage image = null;
        try {
            image = ImageIO.read(SpriteSheet.class.getResourceAsStream(path));
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (image == null) { return; }
        this.path = path;
        this.width = image.getWidth();
        this.height = image.getHeight();
        pixels = image.getRGB(0, 0, width, height, null, 0, width);
        for (int i = 0; i < pixels.length; i++) {
            // Keep only the lowest colour byte and quantise 0-255 down to
            // 0-3, i.e. four grey levels per sprite for palette lookup.
            pixels[i] = (pixels[i] & 0xff) / 64;
        }
    }
}
^ This is the code where the image gets imported.
package exikle.learn.game.gfx;
public class Colours {

    public static int get(int colour1, int colour2, int colour3, int colour4) {
        // Pack four palette entries into one int, one byte per grey level.
        return (get(colour4) << 24) + (get(colour3) << 16)
                + (get(colour2) << 8) + get(colour1);
    }

    private static int get(int colour) {
        if (colour < 0)
            return 255; // negative input means "no colour" for this level
        // colour is a three-digit number; each digit (0-5) is one RGB
        // channel, giving a 6*6*6 = 216-entry palette.
        int r = colour / 100 % 10;
        int g = colour / 10 % 10;
        int b = colour % 10;
        return r * 36 + g * 6 + b;
    }
}
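For example (hypothetical values), a single call selects four palette entries, one per grey level on the sheet; each argument is a three-digit number with one 0-5 digit per RGB channel, and -1 maps to 255, which Minicraft-style renderers typically treat as "skip this pixel":
int tileColours = Colours.get(-1, 222, 115, 445); // transparent, grey, blue, light blue-grey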
^ And this is the code which I think deals with all the colours, but I'm kind of confused about it.
My question is: how do I remove the colour modifier and just import and display the sprite sheet as is, with the colours it already has?
So you're fiddling with the Minicraft source, I see. The thing about Notch's code is that he substantially limited himself technically in this game. What the engine is basically doing is saying that every sprite/tile can have 4 colours (from the grey-scaled sprite sheet); he generates his own colour palette, retrieves colours from it, and sets them accordingly during rendering. I can't remember exactly how many bits per channel he set and such.
However, you are obviously very new to programming, and IMO there's nothing better than fiddling with and analyzing other people's code... that is, if you actually can do so. The Screen class is where the rendering takes place, and hence it's what uses the sprite sheet and gives colour accordingly to whatever tile you tell it to get. Markus is quite clever, despite the poorly written code (which is completely forgiven, as he did have 48 hours to make the damned thing ;))
If you want to just display the sprite sheet as is, you can either rewrite the render function or overload it to something like this (in class Screen):
public void render() {
    for (int y = 0; y < h; y++) {
        if (y >= sheet.h) continue; // prevent going out of bounds on the y-axis
        for (int x = 0; x < w; x++) {
            if (x >= sheet.w) continue; // prevent going out of bounds on the x-axis
            pixels[x + y * w] = sheet.pixels[x + y * sheet.w];
        }
    }
}
This will just put whatever part of the sheet fits into the screen for rendering (it's a really simple piece of code, but it should work). The next step will be copying the pixels over to the actual raster for display, which I'm sure you can handle; see the sketch below. (If you have copy-pasted all of the Minicraft source code, or some other slightly modified source, you might want to change some things about that as well.)
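For completeness, a sketch of that last step under common assumptions (a BufferedImage backed by an int raster, as in many Minicraft-style engines; w, h and pixels are the Screen fields):
// Copy the screen's pixel array into the image that gets drawn each frame.
BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
int[] raster = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
System.arraycopy(pixels, 0, raster, 0, pixels.length);
// ...then draw img onto the Canvas graphics with g.drawImage(img, 0, 0, null).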
All the cheers!
The basic change would be to replace the get(int) method...
private static int get(int colour) {
    //if (colour < 0)
    //    return 255;
    //int r = colour / 100 % 10;
    //int g = colour / 10 % 10;
    //int b = colour % 10;
    //return r * 36 + g * 6 + b;
    return colour;
}
I'd also get rid of
for (int i = 0; i < pixels.length; i++) {
    pixels[i] = (pixels[i] & 0xff) / 64;
}
from the SpriteSheet constructor.
But to be honest, wouldn't it be easier to simply use BufferedImage#getSubimage?
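For reference, a minimal sketch of the getSubimage route (the tile coordinates and 8-pixel tile size are assumptions based on the question's 8x8 sheet; g, x and y come from whatever rendering context you have):
BufferedImage sheet = ImageIO.read(SpriteSheet.class.getResourceAsStream(path));
// Grab the tile at column 2, row 3, with its original colours untouched.
BufferedImage sprite = sheet.getSubimage(2 * 8, 3 * 8, 8, 8);
g.drawImage(sprite, x, y, null);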
There are a lot of questions about how to make the background color of an image transparent, but all the answers seem to use an RgbImageFilter to make every occurrence of a specific color transparent.
My question is, how would I implement this "background removal" in Java, so that it floods transparency from a fixed point (as per the "bucket" operation in Paint, or the RMagick function Image#matte_floodfill)?
As is the way with the Internet, I wound up on this page after a bit of searching, trying to find some code that did something similar.
Here's my knocked-together solution. It's not perfect, but it's perhaps a starting point for someone else trying to do it.
This works by choosing the four corners of the image, averaging them, and using the result as the anchor colour. I use a Pixel class for what seemed like convenience initially, and ended up wasting my time! Hah. As is the way.
public class Pixel implements Comparable<Pixel> {
    int x, y;

    public Pixel(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public int compareTo(Pixel p) {
        if (p.x == x && p.y == y)
            return 0;
        return -1;
    }
}
And here's the beef:
public class ImageGrab {
    private static int pixelSimilarityLimit = 20;

    public static void main(String[] args) {
        BufferedImage image = null;
        try {
            URL url = new URL("http://animal-photography.com/thumbs/russian_blue_cat_side_view_on_~AP-0PR4DL-TH.jpg");
            image = ImageIO.read(url);
        } catch (IOException e) {
            e.printStackTrace();
        }
        // Sample the four corners and average them into the anchor colour.
        Color[] corners = new Color[]{new Color(image.getRGB(0, 0)),
                new Color(image.getRGB(image.getWidth() - 1, 0)),
                new Color(image.getRGB(0, image.getHeight() - 1)),
                new Color(image.getRGB(image.getWidth() - 1, image.getHeight() - 1))};
        int avr = 0, avb = 0, avg = 0, ava = 0;
        for (Color c : corners) {
            avr += c.getRed();
            avb += c.getBlue();
            avg += c.getGreen();
            ava += c.getAlpha();
        }
        System.out.println(avr / 4 + "," + avg / 4 + "," + avb / 4 + "," + ava / 4);
        // Give up unless all four corners are close to the average colour.
        for (Color c : corners) {
            if (!(Math.abs(c.getRed() - avr / 4) < pixelSimilarityLimit &&
                    Math.abs(c.getBlue() - avb / 4) < pixelSimilarityLimit &&
                    Math.abs(c.getGreen() - avg / 4) < pixelSimilarityLimit &&
                    Math.abs(c.getAlpha() - ava / 4) < pixelSimilarityLimit)) {
                return;
            }
        }
        Color master = new Color(avr / 4, avg / 4, avb / 4, ava / 4);
        System.out.println("Image sufficiently bordered.");
        // Breadth-first flood fill, seeded from the four corners.
        LinkedList<Pixel> open = new LinkedList<Pixel>();
        LinkedList<Pixel> closed = new LinkedList<Pixel>();
        open.add(new Pixel(0, 0));
        open.add(new Pixel(0, image.getHeight() - 1));
        open.add(new Pixel(image.getWidth() - 1, 0));
        open.add(new Pixel(image.getWidth() - 1, image.getHeight() - 1));
        while (open.size() > 0) {
            Pixel p = open.removeFirst();
            closed.add(p);
            for (int i = -1; i < 2; i++) {
                for (int j = -1; j < 2; j++) {
                    if (i == 0 && j == 0)
                        continue;
                    if (p.x + i < 0 || p.x + i >= image.getWidth() || p.y + j < 0 || p.y + j >= image.getHeight())
                        continue;
                    Pixel thisPoint = new Pixel(p.x + i, p.y + j);
                    boolean add = true;
                    // Linear scans keep this simple but slow; a HashSet would be faster.
                    for (Pixel pp : open)
                        if (thisPoint.x == pp.x && thisPoint.y == pp.y)
                            add = false;
                    for (Pixel pp : closed)
                        if (thisPoint.x == pp.x && thisPoint.y == pp.y)
                            add = false;
                    if (add && areSimilar(master, new Color(image.getRGB(p.x + i, p.y + j)))) {
                        open.add(thisPoint);
                    }
                }
            }
        }
        // Blank out every visited pixel. Note: for real transparency the image
        // needs an alpha channel (e.g. copy it into TYPE_INT_ARGB first).
        for (Pixel p : closed) {
            Color newC = new Color(0, 0, 0, 0);
            image.setRGB(p.x, p.y, newC.getRGB());
        }
        try {
            File outputfile = new File("C:/Users/Mike/Desktop/saved.png");
            ImageIO.write(image, "png", outputfile);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static boolean areSimilar(Color c, Color d) {
        return Math.abs(c.getRed() - d.getRed()) < pixelSimilarityLimit &&
                Math.abs(c.getBlue() - d.getBlue()) < pixelSimilarityLimit &&
                Math.abs(c.getGreen() - d.getGreen()) < pixelSimilarityLimit &&
                Math.abs(c.getAlpha() - d.getAlpha()) < pixelSimilarityLimit;
    }
}
In case anyone's worried, consider this public domain. Cheers! Hope it helps.
An unsatisfactory solution that I'm currently using is simply anticipating the background color that you're going to place your transparent image against (as you usually will do this) and using the solution with an RgbImageFilter as described here.
If someone wants to post a satisfactory solution, please do - until then, I'm going to accept this, as it works.
Here is something that I just put together to remove the background from a BufferedImage. It is pretty simple, but there may be more efficient ways of doing it.
It is set up with three inputs (a source image, the tolerance allowed, and the color that you want to replace the background with). It simply returns a BufferedImage with the changes made to it.
It finds the color near each corner and averages them to create a reference color, then it replaces each pixel that is within the tolerance range of the reference.
In order to make the background transparent, you would need to pass in
BufferedImage RemoveBackground(BufferedImage src, float tol, int color)
{
    BufferedImage dest = src; // note: same object, so src is modified in place
    int h = dest.getHeight();
    int w = dest.getWidth();
    // Average the packed RGB values near the four corners as the reference.
    // This is crude and only behaves well when the corners are very similar.
    int refCol = -(dest.getRGB(2, 2) + dest.getRGB(w - 2, 2) + dest.getRGB(2, h - 2) + dest.getRGB(w - 2, h - 2)) / 4;
    int Col = 0;
    int x = 1;
    int y = 1;
    int upperBound = (int) (refCol * (1 + tol));
    int lowerBound = (int) (refCol * (1 - tol));
    while (x < w)
    {
        y = 1;
        while (y < h)
        {
            Col = -dest.getRGB(x, y);
            if (Col > lowerBound && Col < upperBound)
            {
                dest.setRGB(x, y, color); // within tolerance: replace the pixel
            }
            y++;
        }
        x++;
    }
    return dest;
}
I know this is an old thread, but hopefully this will come in handy for someone.
Edit: I just realized that this does not work for transparency, just for replacing one RGB value with another. It would need a little adaptation to handle ARGB values.
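A sketch of that adaptation: copy the source into an image with an alpha channel first, then pass a fully transparent colour as the replacement (the 0.3f tolerance is just an example value):
BufferedImage argb = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
Graphics2D g = argb.createGraphics();
g.drawImage(src, 0, 0, null);
g.dispose();
BufferedImage result = RemoveBackground(argb, 0.3f, new Color(0, 0, 0, 0).getRGB());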