Java/Image: How to make adjacent background pixels transparent?

There are a lot of questions about how to make the background color of an image transparent, but all the answers seem to use an RGBImageFilter to make every occurrence of a specific color transparent.
My question is, how would I implement this "background removal" in Java, so that it floods transparency from a fixed point (as per the "bucket" operation in Paint, or the RMagick function Image#matte_floodfill)?

As is the way with the Internet, I wound up on this page after a bit of searching trying to find some code that did something similar.
Here's my knocked-together solution. It's not perfect but it's perhaps a starting point for someone else trying to do it.
This works by choosing the four corners of the image, averaging them, and using that as the anchor colour. I use a Pixel class for what seemed like convenience initially and ended up wasting my time! Hah. As is the way.
public class Pixel implements Comparable {
    int x, y;

    public Pixel(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public int compareTo(Object arg0) {
        Pixel p = (Pixel) arg0;
        if (p.x == x && p.y == y)
            return 0;
        return -1;
    }
}
And here's the beef:
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.net.URL;
import java.util.LinkedList;
import javax.imageio.ImageIO;

public class ImageGrab {

    private static int pixelSimilarityLimit = 20;

    public static void main(String[] args) {
        BufferedImage image = null;
        try {
            URL url = new URL("http://animal-photography.com/thumbs/russian_blue_cat_side_view_on_~AP-0PR4DL-TH.jpg");
            image = ImageIO.read(url);
        } catch (IOException e) {
            e.printStackTrace();
        }

        // Convert to a type with an alpha channel, otherwise setRGB with a
        // transparent colour silently produces black instead of transparency.
        BufferedImage argb = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_ARGB);
        argb.getGraphics().drawImage(image, 0, 0, null);
        image = argb;

        // Sample the four corners and average them to get the anchor colour.
        Color[] corners = new Color[]{
                new Color(image.getRGB(0, 0)),
                new Color(image.getRGB(image.getWidth() - 1, 0)),
                new Color(image.getRGB(0, image.getHeight() - 1)),
                new Color(image.getRGB(image.getWidth() - 1, image.getHeight() - 1))};
        int avr = 0, avb = 0, avg = 0, ava = 0;
        for (Color c : corners) {
            avr += c.getRed();
            avb += c.getBlue();
            avg += c.getGreen();
            ava += c.getAlpha();
        }
        System.out.println(avr / 4 + "," + avg / 4 + "," + avb / 4 + "," + ava / 4);

        // Give up unless all four corners are close to the average.
        for (Color c : corners) {
            if (!(Math.abs(c.getRed() - avr / 4) < pixelSimilarityLimit &&
                  Math.abs(c.getBlue() - avb / 4) < pixelSimilarityLimit &&
                  Math.abs(c.getGreen() - avg / 4) < pixelSimilarityLimit &&
                  Math.abs(c.getAlpha() - ava / 4) < pixelSimilarityLimit)) {
                return;
            }
        }
        Color master = new Color(avr / 4, avg / 4, avb / 4, ava / 4);
        System.out.println("Image sufficiently bordered.");

        // Flood inwards from the four corners, collecting background-like pixels.
        LinkedList<Pixel> open = new LinkedList<Pixel>();
        LinkedList<Pixel> closed = new LinkedList<Pixel>();
        open.add(new Pixel(0, 0));
        open.add(new Pixel(0, image.getHeight() - 1));
        open.add(new Pixel(image.getWidth() - 1, 0));
        open.add(new Pixel(image.getWidth() - 1, image.getHeight() - 1));
        while (open.size() > 0) {
            Pixel p = open.removeFirst();
            closed.add(p);
            for (int i = -1; i < 2; i++) {
                for (int j = -1; j < 2; j++) {
                    if (i == 0 && j == 0)
                        continue;
                    if (p.x + i < 0 || p.x + i >= image.getWidth() || p.y + j < 0 || p.y + j >= image.getHeight())
                        continue;
                    Pixel thisPoint = new Pixel(p.x + i, p.y + j);
                    boolean add = true;
                    for (Pixel pp : open)
                        if (thisPoint.x == pp.x && thisPoint.y == pp.y)
                            add = false;
                    for (Pixel pp : closed)
                        if (thisPoint.x == pp.x && thisPoint.y == pp.y)
                            add = false;
                    if (add && areSimilar(master, new Color(image.getRGB(p.x + i, p.y + j)))) {
                        open.add(thisPoint);
                    }
                }
            }
        }

        // Make every collected pixel fully transparent.
        for (Pixel p : closed) {
            Color newC = new Color(0, 0, 0, 0);
            image.setRGB(p.x, p.y, newC.getRGB());
        }

        try {
            File outputfile = new File("C:/Users/Mike/Desktop/saved.png");
            ImageIO.write(image, "png", outputfile);
        } catch (IOException e) {
        }
    }

    public static boolean areSimilar(Color c, Color d) {
        return Math.abs(c.getRed() - d.getRed()) < pixelSimilarityLimit &&
               Math.abs(c.getBlue() - d.getBlue()) < pixelSimilarityLimit &&
               Math.abs(c.getGreen() - d.getGreen()) < pixelSimilarityLimit &&
               Math.abs(c.getAlpha() - d.getAlpha()) < pixelSimilarityLimit;
    }
}
In case anyone's worried, consider this public domain. Cheers! Hope it helps.

An unsatisfactory solution that I'm currently using is simply anticipating the background color that you're going to place the transparent image against (as you usually know this in advance) and using the RGBImageFilter solution as described here.
If someone wants to post a satisfactory solution, please do - until then, I'm going to accept this, as it works.
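For anyone who lands here wanting that filter approach spelled out, here is a minimal sketch of it (not the flood fill asked about above); the class and method names are mine, and it assumes you already know the background colour:
import java.awt.Color;
import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.awt.image.FilteredImageSource;
import java.awt.image.ImageFilter;
import java.awt.image.ImageProducer;
import java.awt.image.RGBImageFilter;

class ColorToTransparency {
    // Makes every pixel matching targetColor fully transparent (ignores alpha when matching).
    static Image makeColorTransparent(BufferedImage source, Color targetColor) {
        final int targetRGB = targetColor.getRGB() | 0xFF000000;
        ImageFilter filter = new RGBImageFilter() {
            @Override
            public int filterRGB(int x, int y, int rgb) {
                if ((rgb | 0xFF000000) == targetRGB) {
                    return rgb & 0x00FFFFFF; // zero out the alpha bits
                }
                return rgb;
            }
        };
        ImageProducer producer = new FilteredImageSource(source.getSource(), filter);
        return Toolkit.getDefaultToolkit().createImage(producer);
    }
}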

Here is something that I just put together to remove the background from a BufferedImage. It is pretty simple but there may be more efficient ways of doing it.
I have it set up with three inputs (a source image, the tolerance allowed, and the color that you want to replace the background with). It simply returns a buffered image with the changes made to it.
It finds the color near each corner and averages them to create a reference color, then it replaces each pixel that is within the tolerance range of the reference.
In order to make the background transparent you would need to pass in a replacement color with zero alpha (though see the edit at the end of this answer):
BufferedImage RemoveBackground(BufferedImage src, float tol, int color)
{
    BufferedImage dest = src;
    int h = dest.getHeight();
    int w = dest.getWidth();
    int refCol = -(dest.getRGB(2,2) + dest.getRGB(w-2,2) + dest.getRGB(2,h-2) + dest.getRGB(w-2,h-2)) / 4;
    int Col = 0;
    int x = 1;
    int y = 1;
    int upperBound = (int) (refCol * (1 + tol));
    int lowerBound = (int) (refCol * (1 - tol));
    while (x < w)
    {
        y = 1;
        while (y < h)
        {
            Col = -dest.getRGB(x,y);
            if (Col > lowerBound && Col < upperBound)
            {
                dest.setRGB(x,y,color);
            }
            y++;
        }
        x++;
    }
    return dest;
}
I know this is an old thread but hopefully this will come in handy for someone.
Edit: I just realized that this does not work for transparencies, just for replacing an RGB value with another RGB value. It would need a little adaptation to handle ARGB values.
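For what it's worth, here is a rough sketch of that ARGB adaptation (assumptions are mine: the image is copied into a TYPE_INT_ARGB buffer, a single corner is used as the reference, and the tolerance is a per-channel difference rather than a fraction):
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

class BackgroundRemover {
    // tol is a per-channel difference in 0-255, e.g. 30
    static BufferedImage removeBackgroundARGB(BufferedImage src, int tol) {
        // Copy into an image that actually has an alpha channel.
        BufferedImage dest = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = dest.createGraphics();
        g.drawImage(src, 0, 0, null);
        g.dispose();

        Color ref = new Color(src.getRGB(2, 2)); // simplification: one corner as the reference
        for (int y = 0; y < dest.getHeight(); y++) {
            for (int x = 0; x < dest.getWidth(); x++) {
                Color c = new Color(dest.getRGB(x, y));
                if (Math.abs(c.getRed() - ref.getRed()) < tol
                        && Math.abs(c.getGreen() - ref.getGreen()) < tol
                        && Math.abs(c.getBlue() - ref.getBlue()) < tol) {
                    dest.setRGB(x, y, 0x00000000); // fully transparent
                }
            }
        }
        return dest;
    }
}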

Related

BufferedImage slows down performance

I'm working on a game, nothing serious, just for fun.
I wrote a class 'ImageBuilder' to help create some images.
Everything works fine, except one thing.
I initialize a variable like this:
// other stuff
m_tile = new ImageBuilder(TILE_SIZE, TILE_SIZE, BufferedImage.TYPE_INT_RGB).paint(0xff069dee).paintBorder(0xff4c4a4a, 1).build();
// other stuff
Then, in the rendering method, I have:
for (int x = 0; x < 16; x++) {
    for (int y = 0; y < 16; y++) {
        g.drawImage(m_tile, x * (TILE_SIZE + m_padding.x) + m_margin.x, y * (TILE_SIZE + m_padding.y) + m_margin.y, null);
    }
}
Note: m_padding and m_margin are just two Vector2i
This draws a simple 16x16 table on the screen using that image, but the game is almost frozen; I can't get more than about 10 FPS.
I tried creating the image without that class, like this (TILE_SIZE = 32):
m_tile = new BufferedImage(TILE_SIZE, TILE_SIZE, BufferedImage.TYPE_INT_RGB);
for (int x = 0; x < TILE_SIZE; x++) {
    for (int y = 0; y < TILE_SIZE; y++) {
        if (x == 0 || y == 0 || x + 1 == TILE_SIZE || y + 1 == TILE_SIZE)
            m_tile.setRGB(x, y, 0x4c4a4a);
        else
            m_tile.setRGB(x, y, 0x069dee);
    }
}
This time, I get 60 FPS.
I can't figure out what the difference is; I usually create images with 'ImageBuilder' and everything is fine, but not this time.
ImageBuilder class:
// Constructor
public ImageBuilder(int width, int height, int imageType) {
    this.m_width = width;
    this.m_height = height;
    this.m_image = new BufferedImage(m_width, m_height, imageType);
    this.m_pixels = ((DataBufferInt) m_image.getRaster().getDataBuffer()).getData();
    this.m_image_type = imageType;
}

public ImageBuilder paint(int color) {
    for (int i = 0; i < m_pixels.length; i++) m_pixels[i] = color;
    return this;
}

public ImageBuilder paintBorder(int color, int stroke) {
    for (int x = 0; x < m_width; x++) {
        for (int y = 0; y < m_height; y++) {
            if (x < stroke || y < stroke || x + stroke >= m_width || y + stroke >= m_height) {
                m_pixels[x + y * m_width] = color;
            }
        }
    }
    return this;
}

public BufferedImage build() {
    return m_image;
}
There are other methods, but I don't call them, so I don't think it's necessary to include them.
What am I doing wrong?
My guess is that the problem is your ImageBuilder accessing the backing data array of the data buffer:
this.m_pixels = ((DataBufferInt) m_image.getRaster().getDataBuffer()).getData();
Doing so may (will) ruin the chances of this image being hardware accelerated. This is documented behaviour, from the getData() API doc:
Note that calling this method may cause this DataBuffer object to be incompatible with performance optimizations used by some implementations (such as caching an associated image in video memory).
You could probably get around this easily, by using a temporary image in your builder and returning a copy of that temp image from the build() method, one that has not been "tampered" with.
For best performance, always using a compatible image (as in createCompatibleImage(), mentioned by @VGR in the comments) is a good idea too. This should ensure you have the fastest possible hardware blits.
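A minimal sketch of that idea (assumption: the builder keeps drawing into its internal working image as before, and build() hands back a fresh, untouched copy such as this helper produces):
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

final class ImageCopies {
    // Returns a screen-compatible copy whose DataBuffer was never accessed directly,
    // so the copy stays eligible for hardware-accelerated drawing.
    static BufferedImage compatibleCopy(BufferedImage source) {
        GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getDefaultConfiguration();
        BufferedImage copy = gc.createCompatibleImage(source.getWidth(), source.getHeight(), Transparency.OPAQUE);
        Graphics2D g = copy.createGraphics();
        g.drawImage(source, 0, 0, null);
        g.dispose();
        return copy;
    }
}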

Algorithm to get all pixels between color border?

I have a long PNG file containing many sprites in a row, but their width/height changes by a little bit. However, all sprites have a fixed blue 1px border around them.
However, after each sprite, the borders are connected to each other by 2px (one border directly touching the next); see this:
But at the bottom of the sprites, one pixel of the border is missing.
Is there an existing algorithm that can get all the pixels between a color border like this, including the border pixels?
Or any other ideas on how to grab all the sprites from a file like this and give them a fixed size?
I took your image and transformed it to match your description.
In plain terms: I went from left to right, identified vertical lines that might indicate the start or end of an image, and used a tracker variable to decide which is which.
I approached it like this in Java:
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.io.File;
import java.io.IOException;

public class PixelArtSizeFinder {

    public static void main(String[] args) throws IOException {
        File imageFile = new File("pixel_boat.png");
        BufferedImage image = ImageIO.read(imageFile);
        int w = image.getWidth();
        int h = image.getHeight();
        System.out.format("Size: %dx%d%n", w, h);
        Raster data = image.getData();

        int objectsFound = 0;
        int startObjectWidth = 0;
        int endObjectWidth = 0;
        boolean scanningObject = false;

        for (int x = 0; x < w; x++) {
            boolean verticalLineContainsOnlyTransparentOrBorder = true;
            for (int y = 0; y < h; y++) {
                int[] pixel = data.getPixel(x, y, new int[4]);
                if (isOther(pixel)) {
                    verticalLineContainsOnlyTransparentOrBorder = false;
                }
            }
            if (verticalLineContainsOnlyTransparentOrBorder) {
                if (scanningObject) {
                    endObjectWidth = x;
                    System.out.format("Object %d: %d-%d (%dpx)%n",
                            objectsFound,
                            startObjectWidth,
                            endObjectWidth,
                            endObjectWidth - startObjectWidth);
                } else {
                    objectsFound++;
                    startObjectWidth = x;
                }
                scanningObject ^= true; //toggle
            }
        }
    }

    private static boolean isTransparent(int[] pixel) {
        return pixel[3] == 0;
    }

    private static boolean isBorder(int[] pixel) {
        return pixel[0] == 0 && pixel[1] == 187 && pixel[2] == 255 && pixel[3] == 255;
    }

    private static boolean isOther(int[] pixel) {
        return !isTransparent(pixel) && !isBorder(pixel);
    }
}
and the result was
Size: 171x72
Object 1: 0-27 (27px)
Object 2: 28-56 (28px)
Object 3: 57-85 (28px)
Object 4: 86-113 (27px)
Object 5: 114-142 (28px)
Object 6: 143-170 (27px)
I don't know if an algorithm or function already exists for this, but here is what you can do:
Since the boats are all similar and you want all the pixels between two blue pixels, you can use something like this:
for all i in vertical pixels
    for all j in horizontal pixels
        if pixel(i,j) == blue then
            j = j + 1
            while pixel(i,j) != blue then
                save this pixel in an array, for example
                j = j + 1
            end while
        end if
    end for
end for
This is just an idea and certainly not the most optimal, but you can use it and refine it to make it better ;)
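If it helps, here is a rough Java rendering of that pseudocode (a sketch only; BORDER_BLUE assumes the opaque blue (0, 187, 255) used in the answer above, so replace it with your actual border colour):
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

class BetweenBorders {
    // Assumed border colour in 0xAARRGGBB form; substitute your own blue.
    static final int BORDER_BLUE = 0xFF00BBFF;

    static List<Point> pixelsBetweenBorders(BufferedImage img) {
        List<Point> inside = new ArrayList<Point>();
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                if (img.getRGB(x, y) == BORDER_BLUE) {
                    int j = x + 1;
                    // collect pixels until the next border pixel (or the end of the row)
                    while (j < img.getWidth() && img.getRGB(j, y) != BORDER_BLUE) {
                        inside.add(new Point(j, y));
                        j++;
                    }
                    x = j; // skip past the collected span
                }
            }
        }
        return inside;
    }
}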

How to get clear mask of users in simple-openni?

I am trying to extract the user silhouette and put it above my images. I was able to make a mask and cut the user out of the RGB image, but the contour is messy.
The question is how I can make the mask more precise (to fit the real user). I've tried ERODE-DILATE filters, but they don't do much. Maybe I need some feather filter like in Photoshop. Or I don't know.
Here is my code.
import SimpleOpenNI.*;

SimpleOpenNI context;
PImage mask;

void setup()
{
  size(640*2, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false)
  {
    exit();
    return;
  }
  context.enableDepth();
  context.enableRGB();
  context.enableUser();
  context.alternativeViewPointDepthToImage();
}

void draw()
{
  frame.setTitle(int(frameRate) + " fps");
  context.update();
  int[] userMap = context.userMap();
  background(0, 0, 0);
  mask = loadImage("black640.jpg"); //just a black image
  int xSize = context.depthWidth();
  int ySize = context.depthHeight();
  mask.loadPixels();
  for (int y = 0; y < ySize; y++) {
    for (int x = 0; x < xSize; x++) {
      int index = x + y*xSize;
      if (userMap[index]>0) {
        mask.pixels[index]=color(255, 255, 255);
      }
    }
  }
  mask.updatePixels();
  image(mask, 0, 0);
  mask.filter(DILATE);
  mask.filter(DILATE);
  PImage rgb = context.rgbImage();
  rgb.mask(mask);
  image(rgb, context.depthWidth() + 10, 0);
}
It's good you're aligning the RGB and depth streams.
There are a few things that could be improved in terms of efficiency:
No need to reload a black image every single frame (in the draw() loop) since you're modifying all the pixels anyway:
mask = loadImage("black640.jpg"); //just a black image
Also, since you don't need the x,y coordinates as you loop through the user data, you can use a single for loop which should be a bit faster:
for (int i = 0; i < numPixels; i++) {
  mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
}
instead of:
for (int y = 0; y < ySize; y++) {
  for (int x = 0; x < xSize; x++) {
    int index = x + y*xSize;
    if (userMap[index]>0) {
      mask.pixels[index]=color(255, 255, 255);
    }
  }
}
Another hacky thing you could do is retrieve the userImage() from SimpleOpenNI, instead of the userData() and apply a THRESHOLD filter to it, which in theory should give you the same result as above.
For example:
int[] userMap = context.userMap();
background(0, 0, 0);
mask = loadImage("black640.jpg"); //just a black image
int xSize = context.depthWidth();
int ySize = context.depthHeight();
mask.loadPixels();
for (int y = 0; y < ySize; y++) {
  for (int x = 0; x < xSize; x++) {
    int index = x + y*xSize;
    if (userMap[index]>0) {
      mask.pixels[index]=color(255, 255, 255);
    }
  }
}
could be:
mask = context.userImage();
mask.filter(THRESHOLD);
In terms of filtering, if you want to shrink the silhouette you should ERODE, and blurring should give you a bit of that Photoshop-like feathering.
Note that some filter() calls take arguments (like BLUR), while others, like the ERODE/DILATE morphological filters, don't, but you can still roll your own loops to deal with that.
I also recommend having some sort of easy-to-tweak interface (it can be a fancy slider or a simple keyboard shortcut) when playing with filters.
Here's a rough attempt at the refactored sketch with the above comments:
import SimpleOpenNI.*;

SimpleOpenNI context;
PImage mask;
int numPixels = 640*480;
int dilateAmt = 1;
int erodeAmt = 1;
int blurAmt = 0;

void setup()
{
  size(640*2, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false)
  {
    exit();
    return;
  }
  context.enableDepth();
  context.enableRGB();
  context.enableUser();
  context.alternativeViewPointDepthToImage();
  mask = createImage(640, 480, RGB);
}

void draw()
{
  frame.setTitle(int(frameRate) + " fps");
  context.update();
  int[] userMap = context.userMap();
  background(0, 0, 0);
  //you don't need to keep reloading the image every single frame since you're updating all the pixels below anyway
  // mask = loadImage("black640.jpg"); //just a black image
  // mask.loadPixels();
  // int xSize = context.depthWidth();
  // int ySize = context.depthHeight();
  // for (int y = 0; y < ySize; y++) {
  //   for (int x = 0; x < xSize; x++) {
  //     int index = x + y*xSize;
  //     if (userMap[index]>0) {
  //       mask.pixels[index]=color(255, 255, 255);
  //     }
  //   }
  // }
  //a single loop is usually faster than a nested loop and you don't need the x,y coordinates anyway
  for (int i = 0; i < numPixels; i++) {
    mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
  }
  //erode
  for (int i = 0; i < erodeAmt; i++) mask.filter(ERODE);
  //dilate
  for (int i = 0; i < dilateAmt; i++) mask.filter(DILATE);
  //blur
  mask.filter(BLUR, blurAmt);
  mask.updatePixels();
  //preview the mask after you process it
  image(mask, 0, 0);
  PImage rgb = context.rgbImage();
  rgb.mask(mask);
  image(rgb, context.depthWidth() + 10, 0);
  //print filter values for debugging purposes
  fill(255);
  text("erodeAmt: " + erodeAmt + "\tdilateAmt: " + dilateAmt + "\tblurAmt: " + blurAmt, 15, 15);
}

void keyPressed(){
  if(key == 'e') erodeAmt--;
  if(key == 'E') erodeAmt++;
  if(key == 'd') dilateAmt--;
  if(key == 'D') dilateAmt++;
  if(key == 'b') blurAmt--;
  if(key == 'B') blurAmt++;
  //constrain values
  if(erodeAmt < 0) erodeAmt = 0;
  if(dilateAmt < 0) dilateAmt = 0;
  if(blurAmt < 0) blurAmt = 0;
}
Unfortunately I can't test with an actual sensor right now, so please use the concepts explained, but bear in mind the full sketch code isn't tested.
The sketch above (if it runs) should allow you to use keys to control the filter parameters (e/E to decrease/increase erosion, d/D for dilation, b/B for blur). Hopefully you'll get satisfactory results.
When working with SimpleOpenNI in general, I advise recording an .oni file (check out the RecorderPlay example for that) of a person for the most common use case. This will save you some time in the long run when testing and will allow you to work remotely with the sensor detached. One thing to bear in mind: the depth resolution is reduced to half on recordings (but using a usingRecording boolean flag should keep things safe).
The last and probably most important point is about the quality of the end result. Your resulting image can't be that much better if the source image isn't easy to work with to begin with. The depth data from the original Kinect sensor isn't great. The Asus sensors feel a wee bit more stable, but still, the difference is negligible in most cases. If you are going to stick to one of these sensors, make sure you've got a clear background and decent lighting (without too much direct warm light (sunlight, incandescent lightbulbs, etc.), since it may interfere with the sensor).
If you want a more accurate user cut and the above filtering doesn't get the results you're after, consider switching to a better sensor like the KinectV2. The depth quality is much better and the sensor is less susceptible to direct warm light. This may mean you need to use Windows (I see there's a KinectPV2 wrapper available) or openFrameworks (a C++ collection of libraries similar to Processing) with ofxKinectV2.
I've tried the built-in erode-dilate-blur in Processing, but they are very inefficient. Every time I increment blurAmount in img.filter(BLUR, blurAmount), my FPS decreases by 5 frames.
So I decided to try OpenCV. It is much better in comparison, and the result is satisfactory.
import SimpleOpenNI.*;
import processing.video.*;
import gab.opencv.*;

SimpleOpenNI context;
OpenCV opencv;
PImage mask;
int numPixels = 640*480;
int dilateAmt = 1;
int erodeAmt = 1;
int blurAmt = 1;
Movie mov;

void setup(){
  opencv = new OpenCV(this, 640, 480);
  size(640*2, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false) {
    exit();
    return;
  }
  context.enableDepth();
  context.enableRGB();
  context.enableUser();
  context.alternativeViewPointDepthToImage();
  mask = createImage(640, 480, RGB);
  mov = new Movie(this, "wild.mp4");
  mov.play();
  mov.speed(5);
  mov.volume(0);
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  frame.setTitle(int(frameRate) + " fps");
  context.update();
  int[] userMap = context.userMap();
  background(0, 0, 0);
  mask.loadPixels();
  for (int i = 0; i < numPixels; i++) {
    mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
  }
  mask.updatePixels();
  opencv.loadImage(mask);
  opencv.gray();
  for (int i = 0; i < erodeAmt; i++) {
    opencv.erode();
  }
  for (int i = 0; i < dilateAmt; i++) {
    opencv.dilate();
  }
  if (blurAmt > 0) { //blur with 0 amount causes error
    opencv.blur(blurAmt);
  }
  mask = opencv.getSnapshot();
  image(mask, 0, 0);
  PImage rgb = context.rgbImage();
  rgb.mask(mask);
  image(mov, context.depthWidth() + 10, 0);
  image(rgb, context.depthWidth() + 10, 0);
  fill(255);
  text("erodeAmt: " + erodeAmt + "\tdilateAmt: " + dilateAmt + "\tblurAmt: " + blurAmt, 15, 15);
}

void keyPressed() {
  if (key == 'e') erodeAmt--;
  if (key == 'E') erodeAmt++;
  if (key == 'd') dilateAmt--;
  if (key == 'D') dilateAmt++;
  if (key == 'b') blurAmt--;
  if (key == 'B') blurAmt++;
  //constrain values
  if (erodeAmt < 0) erodeAmt = 0;
  if (dilateAmt < 0) dilateAmt = 0;
  if (blurAmt < 0) blurAmt = 0;
}

compare bitmaps after taking views of the android screen - compare method not working

I'm trying to compare two different views by comparing their images to see whether they're the same or not. This is my code:
public boolean equals(View view1, View view2){
    view1.setDrawingCacheEnabled(true);
    view1.buildDrawingCache();
    Bitmap b1 = view1.getDrawingCache();

    view2.setDrawingCacheEnabled(true);
    view2.buildDrawingCache();
    Bitmap b2 = view2.getDrawingCache();

    ByteBuffer buffer1 = ByteBuffer.allocate(b1.getHeight() * b1.getRowBytes());
    b1.copyPixelsToBuffer(buffer1);

    ByteBuffer buffer2 = ByteBuffer.allocate(b2.getHeight() * b2.getRowBytes());
    b2.copyPixelsToBuffer(buffer2);

    return Arrays.equals(buffer1.array(), buffer2.array());
}
However, this is returning true no matter what. Can anyone tell me what I'm doing wrong?
Not sure what's wrong with that code, if anything, but did you try Bitmap.sameAs(Bitmap)?
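For example (a sketch only; sameAs() needs API 12 or newer, and view1/view2 are the views from your method above):
Bitmap b1 = view1.getDrawingCache();
Bitmap b2 = view2.getDrawingCache();
// sameAs() compares config, dimensions and pixel data in one call
boolean identical = b1 != null && b2 != null && b1.sameAs(b2);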
UPDATE: The code below works fine, but your code above seems to always return null from .getDrawingCache(); not sure if this is your problem or not. I don't have the time to look too deeply into this, but you might check "getDrawingCache() returns null" to see how a similar problem was solved, else please provide your logcat.
Here is a port (not really checked too stringently) of the sameAs function from API 15; it was introduced in API 12.
One special check they do is to see if the image is alpha-only, plus a few optimizations to avoid the array check if possible (probably not an issue with your use case). Might as well take advantage of the open source when you can ;-)
boolean SameAs(Bitmap A, Bitmap B) {
    // Different types of image
    if (A.getConfig() != B.getConfig())
        return false;

    // Different sizes
    if (A.getWidth() != B.getWidth())
        return false;
    if (A.getHeight() != B.getHeight())
        return false;

    // Allocate arrays - OK because at worst we have 3 bytes + Alpha (?)
    int w = A.getWidth();
    int h = A.getHeight();
    int[] argbA = new int[w*h];
    int[] argbB = new int[w*h];
    A.getPixels(argbA, 0, w, 0, 0, w, h);
    B.getPixels(argbB, 0, w, 0, 0, w, h);

    // Alpha channel special check
    if (A.getConfig() == Config.ALPHA_8) {
        // in this case we have to manually compare the alpha channel as the rest is garbage.
        final int length = w * h;
        for (int i = 0; i < length; i++) {
            if ((argbA[i] & 0xFF000000) != (argbB[i] & 0xFF000000)) {
                return false;
            }
        }
        return true;
    }

    return Arrays.equals(argbA, argbB);
}
@Idistic's answer helped me to get another solution that is also good for images with higher resolutions, which can cause an OutOfMemory error. The main idea was to split the images into several parts and compare their bytes. In my case 10 parts was enough; I think that is enough for most cases.
private boolean compareBitmaps(Bitmap bitmap1, Bitmap bitmap2)
{
    if (Build.VERSION.SDK_INT > 11)
    {
        return bitmap1.sameAs(bitmap2);
    }

    int chunkNumbers = 10;
    int rows, cols;
    int chunkHeight, chunkWidth;
    rows = cols = (int) Math.sqrt(chunkNumbers);
    chunkHeight = bitmap1.getHeight() / rows;
    chunkWidth = bitmap1.getWidth() / cols;

    int yCoord = 0;
    for (int x = 0; x < rows; x++)
    {
        int xCoord = 0;
        for (int y = 0; y < cols; y++)
        {
            try
            {
                Bitmap bitmapChunk1 = Bitmap.createBitmap(bitmap1, xCoord, yCoord, chunkWidth, chunkHeight);
                Bitmap bitmapChunk2 = Bitmap.createBitmap(bitmap2, xCoord, yCoord, chunkWidth, chunkHeight);
                if (!sameAs(bitmapChunk1, bitmapChunk2))
                {
                    recycleBitmaps(bitmapChunk1, bitmapChunk2);
                    return false;
                }
                recycleBitmaps(bitmapChunk1, bitmapChunk2);
                xCoord += chunkWidth;
            }
            catch (Exception e)
            {
                return false;
            }
        }
        yCoord += chunkHeight;
    }
    return true;
}

private boolean sameAs(Bitmap bitmap1, Bitmap bitmap2)
{
    // Different types of image
    if (bitmap1.getConfig() != bitmap2.getConfig())
        return false;

    // Different sizes
    if (bitmap1.getWidth() != bitmap2.getWidth())
        return false;
    if (bitmap1.getHeight() != bitmap2.getHeight())
        return false;

    int w = bitmap1.getWidth();
    int h = bitmap1.getHeight();
    int[] argbA = new int[w * h];
    int[] argbB = new int[w * h];
    bitmap1.getPixels(argbA, 0, w, 0, 0, w, h);
    bitmap2.getPixels(argbB, 0, w, 0, 0, w, h);

    // Alpha channel special check
    if (bitmap1.getConfig() == Bitmap.Config.ALPHA_8)
    {
        final int length = w * h;
        for (int i = 0; i < length; i++)
        {
            if ((argbA[i] & 0xFF000000) != (argbB[i] & 0xFF000000))
            {
                return false;
            }
        }
        return true;
    }
    return Arrays.equals(argbA, argbB);
}

private void recycleBitmaps(Bitmap bitmap1, Bitmap bitmap2)
{
    bitmap1.recycle();
    bitmap2.recycle();
    bitmap1 = null;
    bitmap2 = null;
}

Breaking bricks with chain reaction

I am developing a game in java just for fun. It is a ball brick breaking game of some sort.
Here is a level: when the ball hits one of the orange bricks, I would like to create a chain reaction that explodes all other bricks that are NOT gray (unbreakable) and are within reach of the brick being exploded.
So it would clear out everything in this level except the gray bricks.
I am thinking I should ask the brick that is being exploded for other bricks to the LEFT, RIGHT, UP, and DOWN of that brick then start the same process with those cells.
//NOTE TO SELF: read up on Enums and List
When an explosive cell is hit by the ball, it calls explodeMyAdjecentCells();
//This is in the Cell class
public void explodeMyAdjecentCells() {
    exploded = true;
    ballGame.breakCell(x, y, imageURL[thickness - 1][0]);
    cellBlocks.explodeCell(getX() - getWidth(), getY());
    cellBlocks.explodeCell(getX() + getWidth(), getY());
    cellBlocks.explodeCell(getX(), getY() - getHeight());
    cellBlocks.explodeCell(getX(), getY() + getHeight());
    remove();
    ballGame.playSound("src\\ballgame\\Sound\\cellBrakes.wav", 100.0f, 0.0f, false, 0.0d);
}

//This is the CellHandler->(CellBlocks)
public void explodeCell(int _X, int _Y) {
    for (int c = 0; c < cells.length; c++) {
        if (cells[c] != null && !cells[c].hasExploded()) {
            if (cells[c].getX() == _X && cells[c].getY() == _Y) {
                int type = cells[c].getThickness();
                if (type != 7 && type != 6 && type != 2) {
                    cells[c].explodeMyAdjecentCells();
                }
            }
        }
    }
}
It successfully removes all my adjacent cells.
But in the explodeMyAdjecentCells() method, I have this line of code:
ballGame.breakCell(x, y, imageURL[thickness - 1][0]);
This line tells the ParticleHandler to create 25 small images (particles) of the exploded cell.
Though all my cells are removed, the ParticleHandler does not create particles for all the removed cells.
The problem was solved just now; it's really stupid.
I had set the ParticleHandler to create a maximum of 1500 particles. My god, how did I not see that!
private int particleCellsMax = 1500; // before
private int particleCellsMax = 2500; // after
Thanks for all the help, people. I will upload the source for creating the particles just for fun, in case anyone needs it.
The source code for splitting an image into parts was taken from:
Kalani's Tech Blog
//Particle Handler
public void breakCell(int _X, int _Y, String URL) {
    File file = new File(URL);
    try {
        FileInputStream fis = new FileInputStream(file);
        BufferedImage image = ImageIO.read(fis);

        int rows = 5;
        int colums = 5;
        int parts = rows * colums;
        int partWidth = image.getWidth() / colums;
        int partHeight = image.getHeight() / rows;
        int count = 0;
        BufferedImage imgs[] = new BufferedImage[parts];
        for (int x = 0; x < colums; x++) {
            for (int y = 0; y < rows; y++) {
                imgs[count] = new BufferedImage(partWidth, partHeight, image.getType());
                Graphics2D g = imgs[count++].createGraphics();
                g.drawImage(image, 0, 0, partWidth, partHeight, partWidth * y, partHeight * x, partWidth * y + partWidth, partHeight * x + partHeight, null);
                g.dispose();
            }
        }

        int numParts = imgs.length;
        int c = 0;
        for (int iy = 0; iy < rows; iy++) {
            for (int ix = 0; ix < colums; ix++) {
                if (c < numParts) {
                    Image imagePart = Toolkit.getDefaultToolkit().createImage(imgs[c].getSource());
                    createCellPart(_X + ((image.getWidth() / colums) * ix), _Y + ((image.getHeight() / rows) * iy), c, imagePart);
                    c++;
                } else {
                    break;
                }
            }
        }
    } catch (IOException io) {}
}
You could consider looking at this in a more OO way and using 'tell, don't ask'. You would have a Brick class, which knows its own colour and its adjacent bricks. You would then tell the first brick to explode; if it is orange (and maybe consider using enums for this, not just numbers), it tells its adjacent bricks to 'chain react' (or something like that), and those bricks then decide what to do (either explode, in the case of an orange brick, and tell their own adjacent bricks, or do nothing, in the case of a grey brick).
I know it's quite different from what you're doing currently, but it will hopefully give you a better structured program.
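A rough sketch of what I mean (class and method names here are illustrative only, not taken from your code):
import java.util.ArrayList;
import java.util.List;

class Brick {
    enum BrickColor { ORANGE, GRAY }

    private final BrickColor color;
    private final List<Brick> adjacent = new ArrayList<Brick>();
    private boolean exploded = false;

    Brick(BrickColor color) { this.color = color; }

    void addAdjacent(Brick other) { adjacent.add(other); }

    // Called when the ball hits this brick.
    void explode() {
        if (exploded || color == BrickColor.GRAY) {
            return; // grey bricks are unbreakable, already-exploded bricks stay quiet
        }
        exploded = true;
        // remove from the board, spawn particles, play sound, ...
        for (Brick neighbour : adjacent) {
            neighbour.chainReact();
        }
    }

    // Each brick decides for itself whether to join the chain reaction.
    void chainReact() {
        explode();
    }
}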
I would imagine a method that would recursively get all touching cells of a similar color.
Then you can operate on that list (of all touching blocks) pretty easily and break all the ones that haven't been broken.
Also note that your getAdjentCell() method has side effects (it does the breaking), which isn't very intuitive given the name.
// I agree with Matt that color (or type) should probably be an enum,
// or at least a class. int isn't very descriptive
public enum CellType { GRAY, RED, ORANGE }

public class Cell {
    ....
    public final CellType type;

    /**
     * Recursively find all adjacent cells that have the same type as this one.
     */
    public List<Cell> getTouchingSimilarCells() {
        return getTouchingSimilarCells(new HashSet<Cell>());
    }

    // The visited set stops the recursion from bouncing back and forth
    // between two adjacent cells forever.
    private List<Cell> getTouchingSimilarCells(Set<Cell> visited) {
        List<Cell> result = new ArrayList<Cell>();
        if (!visited.add(this)) {
            return result; // already collected
        }
        result.add(this);
        for (Cell c : getAdjecentCells()) {
            if (c != null && c.type == this.type) {
                result.addAll(c.getTouchingSimilarCells(visited));
            }
        }
        return result;
    }

    /**
     * Get the 4 adjacent cells (above, below, left and right).<br/>
     * NOTE: a cell may be null in the list if it does not exist.
     */
    public List<Cell> getAdjecentCells() {
        List<Cell> result = new ArrayList<Cell>();
        result.add(cellBlock(this.getX() + 1, this.getY()));
        result.add(cellBlock(this.getX() - 1, this.getY()));
        result.add(cellBlock(this.getX(), this.getY() + 1));
        result.add(cellBlock(this.getX(), this.getY() - 1));
        return result;
    }
}
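Example use (a sketch; hitCell stands for whichever cell the ball hit, and the breaking itself would reuse your existing remove()/particle calls):
for (Cell touching : hitCell.getTouchingSimilarCells()) {
    if (!touching.hasExploded()) {
        touching.remove(); // plus particles/sound, as in your breakCell call
    }
}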
