How to get a clear mask of users in SimpleOpenNI? - java

I am trying to extract the user silhouette and put it above my images. I was able to make a mask and cut the user out of the RGB image, but the contour is messy.
The question is: how can I make the mask more precise (so it fits the real user)? I've tried ERODE/DILATE filters, but they don't do much. Maybe I need some feather filter like in Photoshop, but I'm not sure.
Here is my code.
import SimpleOpenNI.*;

SimpleOpenNI context;
PImage mask;

void setup()
{
  size(640*2, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false)
  {
    exit();
    return;
  }
  context.enableDepth();
  context.enableRGB();
  context.enableUser();
  context.alternativeViewPointDepthToImage();
}

void draw()
{
  frame.setTitle(int(frameRate) + " fps");
  context.update();
  int[] userMap = context.userMap();
  background(0, 0, 0);
  mask = loadImage("black640.jpg"); //just a black image
  int xSize = context.depthWidth();
  int ySize = context.depthHeight();
  mask.loadPixels();
  for (int y = 0; y < ySize; y++) {
    for (int x = 0; x < xSize; x++) {
      int index = x + y*xSize;
      if (userMap[index] > 0) {
        mask.pixels[index] = color(255, 255, 255);
      }
    }
  }
  mask.updatePixels();
  image(mask, 0, 0);
  mask.filter(DILATE);
  mask.filter(DILATE);
  PImage rgb = context.rgbImage();
  rgb.mask(mask);
  image(rgb, context.depthWidth() + 10, 0);
}

It's good you're aligning the RGB and depth streams.
There are a few things that could be improved in terms of efficiency:
There's no need to reload a black image every single frame (in the draw() loop), since you're modifying all of its pixels anyway:
mask = loadImage("black640.jpg"); //just a black image
Also, since you don't need the x,y coordinates as you loop through the user data, you can use a single for loop which should be a bit faster:
for (int i = 0; i < numPixels; i++) {
  mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
}
instead of:
for (int y = 0; y < ySize; y++) {
  for (int x = 0; x < xSize; x++) {
    int index = x + y*xSize;
    if (userMap[index] > 0) {
      mask.pixels[index] = color(255, 255, 255);
    }
  }
}
Another hacky thing you could do is retrieve the userImage() from SimpleOpenNI, instead of the userMap(), and apply a THRESHOLD filter to it, which in theory should give you the same result as above.
For example:
int[] userMap = context.userMap();
background(0, 0, 0);
mask = loadImage("black640.jpg"); //just a black image
int xSize = context.depthWidth();
int ySize = context.depthHeight();
mask.loadPixels();
for (int y = 0; y < ySize; y++) {
  for (int x = 0; x < xSize; x++) {
    int index = x + y*xSize;
    if (userMap[index] > 0) {
      mask.pixels[index] = color(255, 255, 255);
    }
  }
}
could be:
mask = context.userImage();
mask.filter(THRESHOLD);
In terms of filtering, if you want to shrink the silhouette you should ERODE, and blurring should give you a bit of that Photoshop-like feathering.
Note that some filter() calls take arguments (like BLUR), while others, like the ERODE/DILATE morphological filters, don't; you can still roll your own loops to apply them multiple times, as the sketch below does.
I also recommend having some sort of easy-to-tweak interface (it can be a fancy slider or a simple keyboard shortcut) when playing with filters.
Here's a rough attempt at the refactored sketch with the above comments:
import SimpleOpenNI.*;

SimpleOpenNI context;
PImage mask;
int numPixels = 640*480;
int dilateAmt = 1;
int erodeAmt = 1;
int blurAmt = 0;

void setup()
{
  size(640*2, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false)
  {
    exit();
    return;
  }
  context.enableDepth();
  context.enableRGB();
  context.enableUser();
  context.alternativeViewPointDepthToImage();
  mask = createImage(640, 480, RGB);
}

void draw()
{
  frame.setTitle(int(frameRate) + " fps");
  context.update();
  int[] userMap = context.userMap();
  background(0, 0, 0);
  //you don't need to keep reloading the image every single frame since you're updating all the pixels below anyway
  // mask = loadImage("black640.jpg"); //just a black image
  // int xSize = context.depthWidth();
  // int ySize = context.depthHeight();
  // mask.loadPixels();
  // for (int y = 0; y < ySize; y++) {
  //   for (int x = 0; x < xSize; x++) {
  //     int index = x + y*xSize;
  //     if (userMap[index] > 0) {
  //       mask.pixels[index] = color(255, 255, 255);
  //     }
  //   }
  // }
  //a single loop is usually faster than a nested loop and you don't need the x,y coordinates anyway
  mask.loadPixels();
  for (int i = 0; i < numPixels; i++) {
    mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
  }
  mask.updatePixels();
  //erode
  for (int i = 0; i < erodeAmt; i++) mask.filter(ERODE);
  //dilate
  for (int i = 0; i < dilateAmt; i++) mask.filter(DILATE);
  //blur (skipped when the amount is 0)
  if (blurAmt > 0) mask.filter(BLUR, blurAmt);
  //preview the mask after you process it
  image(mask, 0, 0);
  PImage rgb = context.rgbImage();
  rgb.mask(mask);
  image(rgb, context.depthWidth() + 10, 0);
  //print filter values for debugging purposes
  fill(255);
  text("erodeAmt: " + erodeAmt + "\tdilateAmt: " + dilateAmt + "\tblurAmt: " + blurAmt, 15, 15);
}

void keyPressed() {
  if (key == 'e') erodeAmt--;
  if (key == 'E') erodeAmt++;
  if (key == 'd') dilateAmt--;
  if (key == 'D') dilateAmt++;
  if (key == 'b') blurAmt--;
  if (key == 'B') blurAmt++;
  //constrain values
  if (erodeAmt < 0) erodeAmt = 0;
  if (dilateAmt < 0) dilateAmt = 0;
  if (blurAmt < 0) blurAmt = 0;
}
Unfortunately I can't test with an actual sensor right now, so please use the concepts explained, but bear in mind the full sketch code isn't tested.
The sketch above (if it runs) should allow you to use keys to control the filter parameters (e/E to decrease/increase erosion, d/D for dilation, b/B for blur). Hopefully you'll get satisfactory results.
When working with SimpleOpenNI in general, I advise recording an .oni file (check out the RecorderPlay example for that) of a person for the most common use case. This will save you some time in the long run when testing and will allow you to work remotely with the sensor detached. One thing to bear in mind: the depth resolution is reduced to half on recordings (but using a usingRecording boolean flag should keep things safe).
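A rough, untested sketch of that flag (the .oni file name is just a placeholder for your own recording):

boolean usingRecording = true;

void setup() {
  size(640*2, 480);
  context = new SimpleOpenNI(this);
  if (usingRecording) {
    // play back a pre-recorded .oni file instead of the live sensor;
    // "person.oni" is a placeholder for your own recording
    if (context.openFileRecording("person.oni") == false) {
      println("couldn't open the recording");
      exit();
      return;
    }
  }
  context.enableDepth();
  context.enableRGB();
  context.enableUser();
  // size any pixel buffers from context.depthWidth()/depthHeight() rather than
  // hardcoding 640x480, since recordings can come back at half the depth resolution
}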
The last and probably most important point is about the quality of the end result. Your resulting image can't be much better if the source image isn't easy to work with to begin with. The depth data from the original Kinect sensor isn't great. The Asus sensors feel a wee bit more stable, but the difference is negligible in most cases. If you are going to stick to one of these sensors, make sure you've got a clear background and decent lighting (without too much direct warm light (sunlight, incandescent light bulbs, etc.), since it may interfere with the sensor).
If you want a more accurate user cut and the above filtering doesn't get the results you're after, consider switching to a better sensor like the KinectV2. The depth quality is much better and the sensor is less susceptible to direct warm light. This may mean you need to use Windows (I see there's a KinectPV2 wrapper available) or openFrameworks (a C++ collection of libraries similar to Processing) with ofxKinectV2.

I've tried the built-in erode/dilate/blur in Processing, but they are very inefficient: every time I increment blurAmount in img.filter(BLUR, blurAmount), my FPS decreases by about 5 frames.
So I decided to try OpenCV. It is much better in comparison, and the result is satisfactory.
import SimpleOpenNI.*;
import processing.video.*;
import gab.opencv.*;

SimpleOpenNI context;
OpenCV opencv;
PImage mask;
int numPixels = 640*480;
int dilateAmt = 1;
int erodeAmt = 1;
int blurAmt = 1;
Movie mov;

void setup() {
  opencv = new OpenCV(this, 640, 480);
  size(640*2, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false) {
    exit();
    return;
  }
  context.enableDepth();
  context.enableRGB();
  context.enableUser();
  context.alternativeViewPointDepthToImage();
  mask = createImage(640, 480, RGB);
  mov = new Movie(this, "wild.mp4");
  mov.play();
  mov.speed(5);
  mov.volume(0);
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  frame.setTitle(int(frameRate) + " fps");
  context.update();
  int[] userMap = context.userMap();
  background(0, 0, 0);
  mask.loadPixels();
  for (int i = 0; i < numPixels; i++) {
    mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
  }
  mask.updatePixels();
  opencv.loadImage(mask);
  opencv.gray();
  for (int i = 0; i < erodeAmt; i++) {
    opencv.erode();
  }
  for (int i = 0; i < dilateAmt; i++) {
    opencv.dilate();
  }
  if (blurAmt > 0) { //blur with 0 amount causes an error
    opencv.blur(blurAmt);
  }
  mask = opencv.getSnapshot();
  image(mask, 0, 0);
  PImage rgb = context.rgbImage();
  rgb.mask(mask);
  image(mov, context.depthWidth() + 10, 0);
  image(rgb, context.depthWidth() + 10, 0);
  fill(255);
  text("erodeAmt: " + erodeAmt + "\tdilateAmt: " + dilateAmt + "\tblurAmt: " + blurAmt, 15, 15);
}

void keyPressed() {
  if (key == 'e') erodeAmt--;
  if (key == 'E') erodeAmt++;
  if (key == 'd') dilateAmt--;
  if (key == 'D') dilateAmt++;
  if (key == 'b') blurAmt--;
  if (key == 'B') blurAmt++;
  //constrain values
  if (erodeAmt < 0) erodeAmt = 0;
  if (dilateAmt < 0) dilateAmt = 0;
  if (blurAmt < 0) blurAmt = 0;
}

Related

How can I improve my edge detection program to hit smaller details?

I have written an edge detection program in Java and it works well, but it has its limits. Here is a before and after photo using it.
My method checks the difference between each pixel and the pixel below it: if the difference is great enough it marks the pixel black, and if they are similar enough it marks it white.
public Image edgeDetection() {
    Color[][] newImg = new Color[image.length][image[0].length];
    int x = 2; //higher = less sensitive; lower = more sensitive
    for (int r = 0; r < image.length; r++) {
        for (int c = 0; c < image[r].length; c++) {
            int red = image[r][c].getRed();
            int blue = image[r][c].getBlue();
            int green = image[r][c].getGreen();
            if (r < image.length - 1) {
                //compare each channel of this pixel against the pixel below it, in both directions
                if (image[r][c].getRed() - image[r + 1][c].getRed() > x || image[r][c].getRed() - image[r + 1][c].getRed() < -x) {
                    if (image[r][c].getBlue() - image[r + 1][c].getBlue() > x || image[r][c].getBlue() - image[r + 1][c].getBlue() < -x) {
                        if (image[r][c].getGreen() - image[r + 1][c].getGreen() > x || image[r][c].getGreen() - image[r + 1][c].getGreen() < -x) {
                            newImg[r][c] = new Color(0, 0, 0);
                        } else {
                            newImg[r][c] = new Color(255, 255, 255);
                        }
                    } else {
                        newImg[r][c] = new Color(255, 255, 255);
                    }
                } else {
                    newImg[r][c] = new Color(255, 255, 255);
                }
            } else {
                newImg[r][c] = new Color(red, green, blue);
            }
        }
    }
    return new Image(newImg);
}
Currently I have the x value set to 2, so there are more black pixels than white; if I were to increase that number you would see the opposite. There seems to be no x value that makes the image look more like the original while keeping this edge style. Right now I am looking for advice on how to make the image look sharper despite it only being black and white. I want to capture the smaller intricacies in images like this one. If there is anything else I missed, I can provide more code. Thanks!
Also, this project is just for fun, so no worries if this isn't something that's possible with my code!

BufferedImage slows down performance

I'm working on a game, nothing serious, just for fun.
I wrote a class, 'ImageBuilder', to help create some images.
Everything works fine, except one thing.
I initialize a variable like this:
// other stuff
m_tile = new ImageBuilder(TILE_SIZE, TILE_SIZE, BufferedImage.TYPE_INT_RGB).paint(0xff069dee).paintBorder(0xff4c4a4a, 1).build();
// other stuff
Then, in the rendering method, I have:
for (int x = 0; x < 16; x++) {
    for (int y = 0; y < 16; y++) {
        g.drawImage(m_tile, x * (TILE_SIZE + m_padding.x) + m_margin.x, y * (TILE_SIZE + m_padding.y) + m_margin.y, null);
    }
}
Note: m_padding and m_margin are just two Vector2i
This draws a simple 16x16 table on the screen using that image, but the game is almost frozen; I can't get more than about 10 FPS.
I tried creating the image without that class, like this (TILE_SIZE = 32):
m_tile = new BufferedImage(TILE_SIZE, TILE_SIZE, BufferedImage.TYPE_INT_RGB);
for (int x = 0; x < TILE_SIZE; x++) {
    for (int y = 0; y < TILE_SIZE; y++) {
        if (x == 0 || y == 0 || x + 1 == TILE_SIZE || y + 1 == TILE_SIZE)
            m_tile.setRGB(x, y, 0x4c4a4a);
        else
            m_tile.setRGB(x, y, 0x069dee);
    }
}
This time, I get 60 FPS.
I can't figure out what the difference is; I've created images with 'ImageBuilder' before and everything was fine, but not this time.
ImageBuilder class:
// Constructor
public ImageBuilder(int width, int height, int imageType) {
    this.m_width = width;
    this.m_height = height;
    this.m_image = new BufferedImage(m_width, m_height, imageType);
    this.m_pixels = ((DataBufferInt) m_image.getRaster().getDataBuffer()).getData();
    this.m_image_type = imageType;
}

public ImageBuilder paint(int color) {
    for (int i = 0; i < m_pixels.length; i++) m_pixels[i] = color;
    return this;
}

public ImageBuilder paintBorder(int color, int stroke) {
    for (int x = 0; x < m_width; x++) {
        for (int y = 0; y < m_height; y++) {
            if (x < stroke || y < stroke || x + stroke >= m_width || y + stroke >= m_height) {
                m_pixels[x + y * m_width] = color;
            }
        }
    }
    return this;
}

public BufferedImage build() {
    return m_image;
}
There are other methods, but I don't call them, so I don't think it's necessary to include them.
What am I doing wrong?
My guess is that the problem is your ImageBuilder accessing the backing data array of the data buffer:
this.m_pixels = ((DataBufferInt) m_image.getRaster().getDataBuffer()).getData();
Doing so, may (will) ruin the chances for this image being hardware accelerated. This is documented behaviour, from the getData() API doc:
Note that calling this method may cause this DataBuffer object to be incompatible with performance optimizations used by some implementations (such as caching an associated image in video memory).
You could probably get around this easily by using a temporary image in your builder and returning a copy of that temp image from the build() method, one whose data buffer has not been "tampered" with.
For best performance, always using a compatible image (as in createCompatibleImage(), mentioned by @VGR in the comments) is a good idea too. This should ensure you have the fastest possible hardware blits.
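A rough, untested sketch of both suggestions (reusing the m_width, m_height and m_image fields from the ImageBuilder above): build() copies the tampered temp image into a fresh compatible image and returns the copy.

// extra imports needed at the top of the file:
// java.awt.Graphics2D, java.awt.GraphicsConfiguration, java.awt.GraphicsEnvironment
public BufferedImage build() {
    // create a screen-compatible image, which has the best chance of staying hardware accelerated
    GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
            .getDefaultScreenDevice().getDefaultConfiguration();
    BufferedImage copy = gc.createCompatibleImage(m_width, m_height);
    // blit the temp image (whose raster we did grab) into the untouched copy
    Graphics2D g = copy.createGraphics();
    g.drawImage(m_image, 0, 0, null);
    g.dispose();
    return copy;
}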

How to color the rectangles that are being compared?

I am new to the Processing environment and was trying to build a visualizer for bubble sort. I have some questions regarding it:
Is the visualization and drawing of the rectangles correct?
How do I give a different color to the rectangles that are currently being compared?
Can this be done in Java using Swing or any native libraries (i.e. without Processing)? If yes, please provide some resources.
int totalNum = 10;
int[] values = new int[totalNum];
int i = 1;
int noOfComp = 0;

void draw() {
  float rectPos = 0;
  frameRate(10);
  background(255);
  for (int i = 0; i < totalNum; i++) {
    //text(values[i], rectPos, values[i]);
    stroke(220);
    fill(50);
    rect(rectPos, height - values[i], width / totalNum, values[i]);
    rectPos += width / totalNum;
  }
  textSize(20);
  text("No. Of Comparisons: ", 15, 40);
  text(noOfComp, 80, 60);
  bubbleSort();
}

void bubbleSort() {
  if (i < totalNum) {
    if (values[i] < values[i-1] && noOfComp++ > 0) {
      fill(255, 5, 5);
      swap(i, i-1);
      delay(100);
    }
    i++;
  } else {
    i = 1;
  }
}

void swap(int a, int b) {
  int temp = values[a];
  values[a] = values[b];
  values[b] = temp;
}

void setup() {
  size(700, 700);
  for (int i = 0; i < totalNum; i++) {
    values[i] = round(random(0, height));
  }
}
Is the visualization and drawing of the rectangles correct?
That's opinion-based. But it works, so yes it is. The code is well structured and follows the basic guidelines.
How do I color the rectangles different that are being compared currently?
You have to set an individual color with fill() before each rectangle is drawn. A color consists of a red, a green and a blue channel, which are mixed into the final color. If all 3 channels have the same value, the color is a grayscale color: (0, 0, 0) is black and (255, 255, 255) is white.
For example, color the rectangles which are being compared in red and all the others in gray. The rectangles being compared have the indices i and i-1.
Since the control variable of the for loop is also named i, it has to be renamed (e.g. to j):
for (int j = 0; j < totalNum; j++) {
// [...]
}
Compare the index i to the control variable j. If j==i-1 or j==i, set the red fill color (fill(255, 0, 0)); otherwise set the gray color (fill(127)):
for (int j = 0; j < totalNum; j++) {
  stroke(220);
  if (j == i-1 || j == i) {
    fill(255, 0, 0);
  } else {
    fill(127);
  }
  rect(rectPos, height - values[j], width / totalNum, values[j]);
  rectPos += width / totalNum;
}
If you just want to color the "swapped" rectangles, then you have to identify when noOfComp has changed. Store the previous swap count in a variable prevNoOfComp before bubbleSort is called, and use a different color if the swap count has changed (if (noOfComp != prevNoOfComp && (j==i-1 || j==i))):
int noOfComp = 0;
int prevNoOfComp = 0;

void draw() {
  float rectPos = 0;
  frameRate(10);
  background(255);
  for (int j = 0; j < totalNum; j++) {
    stroke(220);
    if (noOfComp != prevNoOfComp && (j == i-1 || j == i)) {
      fill(255, 0, 0);
    } else {
      fill(127);
    }
    rect(rectPos, height - values[j], width / totalNum, values[j]);
    rectPos += width / totalNum;
  }
  textSize(12);
  text("No. Of Comparisons: ", 15, 40);
  text(noOfComp, 80, 60);
  prevNoOfComp = noOfComp;
  bubbleSort();
}
[...] can be done in java using Swing or any native libraries
Questions asking us to recommend or find a book, tool, software library, tutorial or other off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.

Detecting when a PNG image has been "wiped transparent" in Processing?

I'm trying to create a game using Kinect where you have to use your hand movements to wipe away an image within 30 seconds, making it disappear to reveal another image beneath it. I have already written the code for the losing condition: if you do not wipe away the image within 30 seconds, the losing screen pops up.
However, I am not sure how to code the part that detects when the entire PNG image has been "wiped away". Does this involve using get()? I am not sure how to approach this.
Imagine there are two PImages, moondirt.png and moonsurface.png.
The Kinect controls the wiping, making the PImage moondirt.png transparent to reveal moonsurface.png.
void kinect() {
  //----------draw kinect------------
  // Draw moon surface
  image(moonSurface, 0, 0, width, height);
  // Draw the moon dirt
  image(moonDirt, 0, 0, width, height);
  // Threshold the depth image
  int[] rawDepth = kinect.getRawDepth();
  for (int i = 0; i < rawDepth.length; i++) {
    if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
      depthImg.pixels[i] = color(255);
      maskingImg.pixels[i] = color(255);
    } else {
      depthImg.pixels[i] = color(0);
    }
  }
  //moonDirt.resize(640, 480); //(640, 480);
  moonDirt.loadPixels();
  for (int i = 0; i < rawDepth.length; i++) {
    if (maskingImg.pixels[i] == color(255)) {
      moonDirt.pixels[i] = color(0, 0, 0, 0);
    }
  }
  moonDirt.updatePixels();
  image(moonDirt, 0, 0, width, height);
  color c = moonDirt.get(width, height);
  updatePixels();
  //--------timer-----
  if (countDownTimer.complete() == true) {
    if (timeLeft > 1) {
      timeLeft--;
      countDownTimer.start();
    } else {
      state = 4;
      redraw();
    }
  }
  //show countDown TIMER
  String s = "Time Left: " + timeLeft;
  textAlign(CENTER);
  textSize(30);
  fill(255, 0, 0);
  text(s, 380, 320);
}

//timer
class Timer {
  int startTime;
  int interval;

  Timer(int timeInterval) {
    interval = timeInterval;
  }

  void start() {
    startTime = millis();
  }

  boolean complete() {
    int elapsedTime = millis() - startTime;
    if (elapsedTime > interval) {
      return true;
    } else {
      return false;
    }
  }
}
I see the confusion in this section:
moonDirt.loadPixels();
for (int i = 0; i < rawDepth.length; i++) {
  if (maskingImg.pixels[i] == color(255)) {
    moonDirt.pixels[i] = color(0, 0, 0, 0);
  }
}
moonDirt.updatePixels();
image(moonDirt, 0, 0, width, height);
color c = moonDirt.get(width, height);
You are already using pixels[], which is more efficient than get(). Great.
Don't forget to call updatePixels() when you're done; you already do that for moonDirt, but not for maskingImg.
If you want to find out whether an image has been cleared (where "cleared" means transparent black, color(0,0,0,0), in this case), you can count the cleared pixels.
It looks like you're already familiar with functions that take parameters and return values. The count function will need to:
take 2 arguments: the image to process and the colour to check and count
return the total count
iterate through all the pixels: if a pixel matches the 2nd argument, increment the total count
Something like this:
/**
 * countPixels - counts pixels of a certain colour within an image
 * @param image - the PImage to loop through
 * @param colorToCount - the colour to count pixels of within the image
 * @return int - the number of found pixels (between 0 and image.pixels.length)
 */
int countPixels(PImage image, color colorToCount) {
  // initial pixel count
  int count = 0;
  // make pixels[] available
  image.loadPixels();
  // for each pixel
  for (int i = 0; i < image.pixels.length; i++) {
    // check if it matches the colour we're counting
    if (image.pixels[i] == colorToCount) {
      // if so, increment the counter
      count++;
    }
  }
  // finally return the count
  return count;
}
Within your code you could use it like so:
...
// Threshold the depth image
int[] rawDepth = kinect.getRawDepth();
for (int i = 0; i < rawDepth.length; i++) {
  if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
    depthImg.pixels[i] = color(255);
    maskingImg.pixels[i] = color(255);
  } else {
    depthImg.pixels[i] = color(0);
  }
}
maskingImg.updatePixels();
//moonDirt.resize(640, 480); //(640, 480);
moonDirt.loadPixels();
for (int i = 0; i < rawDepth.length; i++) {
  if (maskingImg.pixels[i] == color(255)) {
    moonDirt.pixels[i] = color(0, 0, 0, 0);
  }
}
moonDirt.updatePixels();
image(moonDirt, 0, 0, width, height);

int leftToReveal = moonDirt.pixels.length;
int revealedPixels = countPixels(moonDirt, color(0, 0, 0, 0));
int percentageClear = round(((float)revealedPixels / leftToReveal) * 100);
println("revealed " + revealedPixels + " of " + leftToReveal + " pixels -> ~" + percentageClear + "% cleared");
...
You have the option to require all pixels to be cleared, or just a ratio/percentage (e.g. if more than 90% is clear, that's good enough), and then change the game state accordingly.
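For example, a minimal sketch of that check (assuming the state variable from your sketch, and treating 5 as a hypothetical "win" state):

if (percentageClear >= 90) {
  // 90% of the dirt is gone: treat the image as fully wiped
  state = 5; // hypothetical "win" state; use whatever value your sketch expects
  redraw();
}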

Java/Image: How to make adjacent background pixels transparent?

There are a lot of questions about how to make the background color of an image transparent, but all the answers seem to use an RGBImageFilter to make every occurrence of a specific color transparent.
My question is: how would I implement this "background removal" in Java so that it floods transparency outward from a fixed point (as per the "bucket" operation in Paint, or the RMagick function Image#matte_floodfill)?
As is the way with the Internet, I wound up on this page after a bit of searching trying to find some code that did something similar.
Here's my knocked-together solution. It's not perfect, but it's perhaps a starting point for someone else trying to do it.
This works by choosing the four corners of the image, averaging them, and using that as the anchor colour. I use a Pixel class for what seemed like convenience initially and ended up wasting my time! Hah. As is the way.
public class Pixel implements Comparable {
    int x, y;

    public Pixel(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public int compareTo(Object arg0) {
        Pixel p = (Pixel) arg0;
        if (p.x == x && p.y == y)
            return 0;
        return -1;
    }
}
And here's the beef:
public class ImageGrab {
    private static int pixelSimilarityLimit = 20;

    public static void main(String[] args) {
        BufferedImage image = null;
        try {
            URL url = new URL("http://animal-photography.com/thumbs/russian_blue_cat_side_view_on_~AP-0PR4DL-TH.jpg");
            image = ImageIO.read(url);
        } catch (IOException e) {
            e.printStackTrace();
        }
        Color[] corners = new Color[]{new Color(image.getRGB(0, 0)),
                new Color(image.getRGB(image.getWidth()-1, 0)),
                new Color(image.getRGB(0, image.getHeight()-1)),
                new Color(image.getRGB(image.getWidth()-1, image.getHeight()-1))};
        int avr = 0, avb = 0, avg = 0, ava = 0;
        for (Color c : corners) {
            avr += c.getRed();
            avb += c.getBlue();
            avg += c.getGreen();
            ava += c.getAlpha();
        }
        System.out.println(avr/4+","+avg/4+","+avb/4+","+ava/4);
        for (Color c : corners) {
            if (Math.abs(c.getRed() - avr/4) < pixelSimilarityLimit &&
                Math.abs(c.getBlue() - avb/4) < pixelSimilarityLimit &&
                Math.abs(c.getGreen() - avg/4) < pixelSimilarityLimit &&
                Math.abs(c.getAlpha() - ava/4) < pixelSimilarityLimit) {
            } else {
                return;
            }
        }
        Color master = new Color(avr/4, avg/4, avb/4, ava/4);
        System.out.println("Image sufficiently bordered.");
        LinkedList<Pixel> open = new LinkedList<Pixel>();
        LinkedList<Pixel> closed = new LinkedList<Pixel>();
        open.add(new Pixel(0, 0));
        open.add(new Pixel(0, image.getHeight()-1));
        open.add(new Pixel(image.getWidth()-1, 0));
        open.add(new Pixel(image.getWidth()-1, image.getHeight()-1));
        while (open.size() > 0) {
            Pixel p = open.removeFirst();
            closed.add(p);
            for (int i = -1; i < 2; i++) {
                for (int j = -1; j < 2; j++) {
                    if (i == 0 && j == 0)
                        continue;
                    if (p.x+i < 0 || p.x+i >= image.getWidth() || p.y+j < 0 || p.y+j >= image.getHeight())
                        continue;
                    Pixel thisPoint = new Pixel(p.x+i, p.y+j);
                    boolean add = true;
                    for (Pixel pp : open)
                        if (thisPoint.x == pp.x && thisPoint.y == pp.y)
                            add = false;
                    for (Pixel pp : closed)
                        if (thisPoint.x == pp.x && thisPoint.y == pp.y)
                            add = false;
                    if (add && areSimilar(master, new Color(image.getRGB(p.x+i, p.y+j)))) {
                        open.add(thisPoint);
                    }
                }
            }
        }
        for (Pixel p : closed) {
            Color c = new Color(image.getRGB(p.x, p.y));
            Color newC = new Color(0, 0, 0, 0);
            image.setRGB(p.x, p.y, newC.getRGB());
        }
        try {
            File outputfile = new File("C:/Users/Mike/Desktop/saved.png");
            ImageIO.write(image, "png", outputfile);
        } catch (IOException e) {
        }
    }

    public static boolean areSimilar(Color c, Color d) {
        if (Math.abs(c.getRed() - d.getRed()) < pixelSimilarityLimit &&
            Math.abs(c.getBlue() - d.getBlue()) < pixelSimilarityLimit &&
            Math.abs(c.getGreen() - d.getGreen()) < pixelSimilarityLimit &&
            Math.abs(c.getAlpha() - d.getAlpha()) < pixelSimilarityLimit) {
            return true;
        } else {
            return false;
        }
    }
}
In case anyone's worried, consider this public domain. Cheers! Hope it helps.
An unsatisfactory solution that I'm currently using is simply anticipating the background color that you're going to place your transparent image against (as you usually will know it) and using the RGBImageFilter solution as described here.
If someone wants to post a satisfactory solution, please do; until then, I'm going to accept this, as it works.
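For reference, the RGBImageFilter approach mentioned above looks roughly like this (a sketch, assuming src is a java.awt.Image and that the background colour to knock out is opaque white):

import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.FilteredImageSource;
import java.awt.image.ImageFilter;
import java.awt.image.RGBImageFilter;

ImageFilter filter = new RGBImageFilter() {
    // assumed background colour to knock out (opaque white)
    private final int target = 0xFFFFFFFF;

    @Override
    public int filterRGB(int x, int y, int rgb) {
        // zero the alpha bits of matching pixels; leave everything else untouched
        return (rgb == target) ? (rgb & 0x00FFFFFF) : rgb;
    }
};
Image transparentImage = Toolkit.getDefaultToolkit()
        .createImage(new FilteredImageSource(src.getSource(), filter));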
Here is something that I just put together to remove the background from a BufferedImage. It is pretty simple but there may be more efficient ways of doing it.
I have it set up with three inputs (a source image, the tolerance allowed, and the color that you want to replace the background with). It simply returns a buffered image with the changes made to it.
It finds the color near each corner and averages them to create a reference color, then replaces each pixel that is within the tolerance range of the reference.
In order to make the background transparent, you would need to pass in a color with an alpha of 0 (but see the edit below: the method as written only replaces RGB values).
BufferedImage RemoveBackground(BufferedImage src, float tol, int color)
{
    BufferedImage dest = src;
    int h = dest.getHeight();
    int w = dest.getWidth();
    int refCol = -(dest.getRGB(2,2)+dest.getRGB(w-2,2)+dest.getRGB(2,h-2)+dest.getRGB(w-2,h-2))/4;
    int Col = 0;
    int x = 1;
    int y = 1;
    int upperBound = (int)(refCol*(1+tol));
    int lowerBound = (int)(refCol*(1-tol));
    while (x < w)
    {
        y = 1;
        while (y < h)
        {
            Col = -dest.getRGB(x,y);
            if (Col > lowerBound && Col < upperBound)
            {
                dest.setRGB(x,y,color);
            }
            y++;
        }
        x++;
    }
    return dest;
}
I know this is an old thread, but hopefully this will come in handy for someone.
Edit: I just realized that this does not work for transparencies, just for replacing an RGB value with another RGB value. It would need a little adaptation to handle ARGB values.
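One possible adaptation (my assumption, not part of the original answer): copy the source into a TYPE_INT_ARGB image first, so that setRGB() with a zero-alpha colour actually stores transparency:

// imports needed: java.awt.Graphics2D, java.awt.image.BufferedImage
BufferedImage argb = new BufferedImage(src.getWidth(), src.getHeight(),
        BufferedImage.TYPE_INT_ARGB);
Graphics2D g = argb.createGraphics();
g.drawImage(src, 0, 0, null);
g.dispose();
// now run the same replacement loop on argb, passing a colour such as
// 0x00000000 (fully transparent); the alpha channel will be preserved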
