I'm using the barcode-reader example from Google's Android Vision API.
The preview doesn't seem to fill the whole space available (I'm using a Nexus 4, and there is white unused space to the right of the preview, about 1/3 of the width).
I would like to be able to run this example on various devices and always have it fill up the whole space available.
So the bit I've been playing with is:
CameraSource.Builder builder = new CameraSource.Builder(getApplicationContext(), barcodeDetector)
        .setFacing(CameraSource.CAMERA_FACING_BACK)
        .setRequestedPreviewSize(?, ?)
        .setRequestedFps(15.0f);
Any ideas?
Thanks!
Just remove or comment out the code below in the CameraSourcePreview class:
if (childHeight > layoutHeight) {
childHeight = layoutHeight;
childWidth = (int)(((float) layoutHeight / (float) height) * width);
}
and use layoutHeight instead of childHeight in the child layout loop of the CameraSourcePreview class, i.e. in for (int i = 0; i < getChildCount(); ++i) {...}. The resulting onLayout looks like this:
@Override
protected void onLayout(boolean changed, int left, int top, int right, int bottom) {
int width = 320;   // default dimensions from the sample, used until a preview size is available
int height = 240;
if (mCameraSource != null)
{
Size size = mCameraSource.getPreviewSize();
if (size != null)
{
width = size.getWidth();
height = size.getHeight();
}
}
// Swap width and height sizes when in portrait, since it will be rotated 90 degrees
if (isPortraitMode())
{
int tmp = width;
//noinspection SuspiciousNameCombination
width = height;
height = tmp;
}
final int layoutWidth = right - left;
final int layoutHeight = bottom - top;
// Computes height and width for potentially doing fit width.
int childWidth = layoutWidth;
int childHeight = (int) (((float) layoutWidth / (float) width) * height);
for (int i = 0; i < getChildCount(); ++i)
{
getChildAt(i).layout(0, 0, childWidth, layoutHeight);
}
try
{
startIfReady();
}
catch (SecurityException se)
{
Log.e(TAG, "Do not have permission to start the camera", se);
}
catch (IOException e)
{
Log.e(TAG, "Could not start camera source.", e);
}
}
There are two ways to make the camera image fill the entire screen:
1. Akesh Dubey's answer, which displays the whole image by stretching it to fit the layout's width and height. However, the aspect ratio is not preserved.
2. My answer below, which crops the image to make it fit without sacrificing the aspect ratio.
In order to crop-fit the image, all you have to do is change one > into a <. Find the if-statement below and change the condition like so:
if (childHeight < layoutHeight) {
childHeight = layoutHeight;
childWidth = (int)(((float) layoutHeight / (float) height) * width);
}
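Put together, the relevant part of onLayout then reads like this (a sketch assembled from the snippets above; note that the original sample lays the children out with childWidth and childHeight):
// Computes height and width for potentially doing fit width.
int childWidth = layoutWidth;
int childHeight = (int) (((float) layoutWidth / (float) width) * height);
// Crop-fit: if fit-width leaves the child shorter than the layout, scale by height instead,
// letting the preview overflow horizontally rather than leaving blank space.
if (childHeight < layoutHeight) {
    childHeight = layoutHeight;
    childWidth = (int) (((float) layoutHeight / (float) height) * width);
}
for (int i = 0; i < getChildCount(); ++i) {
    getChildAt(i).layout(0, 0, childWidth, childHeight);
}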
I am trying to create a custom seek bar in Android with multiple colors. I tried the code below.
customseekbar.java
@Override
protected void onDraw(Canvas canvas) {
int proBarWidth = getWidth();
int proBarHeight = getHeight();
int thumboffset = getThumbOffset();
int lastproX = 0;
int proItemWidth, proItemRight;
for (int i = 0; i < mproItemsList.size(); i++) {
proItem proItem = mproItemsList.get(i);
Paint proPaint = new Paint();
proPaint.setColor(getResources().getColor(proItem.color));
proItemWidth = (int) (proItem.proItemPercentage
* proBarWidth / 100);
proItemRight = lastproX + proItemWidth;
// for last item give right of the pro item to width of the
// pro bar
if (i == mproItemsList.size() - 1
&& proItemRight != proBarWidth) {
proItemRight = proBarWidth;
}
Rect proRect = new Rect();
proRect.set(lastproX, thumboffset / 2, proItemRight,
proBarHeight - thumboffset / 2);
canvas.drawRect(proRect, proPaint);
lastproX = proItemRight;
}
super.onDraw(canvas);
}
layout XML
<mypackage.customseekbar
    android:id="@+id/customseekbar"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:max="100"
    android:progress="0"
    android:progressDrawable="@android:color/transparent"
    android:thumb="@drawable/seek_thumb_normal"
    android:thumbOffset="12dp" />
I used this method in MainMenuActivity. It gives me a result like the one shown below. I referred to this link: https://azzits.wordpress.com/2013/11/17/customseekbar/
But I am expecting something like the image below.
Is there any way to draw these vertical gap lines? How can I draw these vertical lines?
"Progress bar with divider" is a good place to look, as mentioned by @Nilesh Rathod. Rather than using canvas.drawRect() you can use canvas.drawRoundRect(). Short example:
// cornerRadius (in px) is assumed to be defined elsewhere, e.g. as a field of the view.
for (int i = 0; i < NUM_SEGMENTS; i++) {
    float loLevel = i / (float) NUM_SEGMENTS;
    float hiLevel = (i + 1) / (float) NUM_SEGMENTS;
    if (loLevel <= level && level <= hiLevel) {
        // Partially filled segment: draw the filled part, then the remainder in the background colour.
        float middle = mSegment.left + NUM_SEGMENTS * segmentWidth * (level - loLevel);
        // The (left, top, right, bottom, rx, ry, paint) overload of drawRoundRect() requires API 21+.
        canvas.drawRoundRect(mSegment.left, mSegment.top, middle, mSegment.bottom, cornerRadius, cornerRadius, mPaint);
        mPaint.setColor(mBackground);
        canvas.drawRoundRect(middle, mSegment.top, mSegment.right, mSegment.bottom, cornerRadius, cornerRadius, mPaint);
    } else {
        canvas.drawRoundRect(mSegment, cornerRadius, cornerRadius, mPaint);
    }
    mSegment.offset(mSegment.width() + gapWidth, 0);
}
Full credit for the code above goes to the creator of the aforementioned link; I don't claim any of it as mine, I am just illustrating the change needed to reach the desired effect. Please let me know if you run into further issues.
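For completeness, a rough setup of the fields and locals the snippet assumes could look like this. All names and values here are assumptions for illustration (not from the linked code); it would run at the top of onDraw() before the segment loop:
final int NUM_SEGMENTS = 5;
float gapWidth = 8f;                                                   // gap between segments, in px
float cornerRadius = 6f;                                               // corner radius for drawRoundRect()
float segmentWidth = (getWidth() - (NUM_SEGMENTS - 1) * gapWidth) / NUM_SEGMENTS;
float level = getProgress() / (float) getMax();                        // progress as a 0..1 fraction
RectF mSegment = new RectF(0, 0, segmentWidth, getHeight());           // bounds of the first segment
Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
mPaint.setColor(Color.GREEN);                                          // colour of the filled part
int mBackground = Color.LTGRAY;                                        // colour of the unfilled part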
I am working with the Google sample project, but I cannot seem to get the preview to work without stretching it.
public void setAspectRatio(int width, int height) {
if (width < 0 || height < 0)
{
throw new IllegalArgumentException("Size cannot be negative.");
}
mRatioWidth = width;
mRatioHeight = height;
requestLayout();
}
I have tried manually changing the aspect ratio in the AutoFitTextureView class; this makes it full screen, but causes it to stretch.
Has anyone figured out a successful implementation of this?
You need to modify the setUpCameraOutputs method. Replace the following lines.
Previously:
Size largest = Collections.max(
Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)),
new CompareSizesByArea());
Modified:
largest = getFullScreenPreview(map.getOutputSizes(ImageFormat.JPEG), width, height);
Previously:
mPreviewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class),
rotatedPreviewWidth, rotatedPreviewHeight, maxPreviewWidth,
maxPreviewHeight, largest);
Modified:
mPreviewSize = getFullScreenPreview(map.getOutputSizes(SurfaceTexture.class),
width, height);
The method for getting a fullscreen preview is as follows:
private Size getFullScreenPreview(Size[] outputSizes, int width, int height) {
List<Size> outputSizeList = Arrays.asList(outputSizes);
outputSizeList = sortListInDescendingOrder(outputSizeList); // because on some phones the available list is in ascending order
Size fullScreenSize = outputSizeList.get(0);
for (int i = 0; i < outputSizeList.size(); i++) {
int orginalWidth = outputSizeList.get(i).getWidth();
int orginalHeight = outputSizeList.get(i).getHeight();
float orginalRatio = (float) orginalWidth / (float) orginalHeight;
float requiredRatio;
if (width > height) {
requiredRatio = ((float) width / height); //for landscape mode
if ((outputSizeList.get(i).getWidth() > width && outputSizeList.get(i).getHeight() > height)) {
// because if we select a preview size higher than the device display resolution, it may fail to create the capture request
continue;
}
} else {
requiredRatio = 1 / ((float) width / height); //for portrait mode
if ((outputSizeList.get(i).getWidth() > height && outputSizeList.get(i).getHeight() > width)) {
// because if we select a preview size higher than the device display resolution, it may fail to create the capture request
continue;
}
}
if (orginalRatio == requiredRatio) {
fullScreenSize = outputSizeList.get(i);
break;
}
}
return fullScreenSize;
}
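The answer does not show sortListInDescendingOrder; a minimal sketch of what it could look like (an assumption on my part: sorting the available android.util.Size values by pixel area, largest first):
private List<Size> sortListInDescendingOrder(List<Size> sizes) {
    List<Size> sorted = new ArrayList<>(sizes);        // copy, since Arrays.asList() is fixed-size
    Collections.sort(sorted, new Comparator<Size>() {
        @Override
        public int compare(Size a, Size b) {
            long areaA = (long) a.getWidth() * a.getHeight();
            long areaB = (long) b.getWidth() * b.getHeight();
            return Long.compare(areaB, areaA);          // descending by area
        }
    });
    return sorted;
}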
Using some math, I created the following Java function. It takes a Bitmap, crops out a centered square from it, and within that square crops out a circle with a black border around it.
The rest of the square should be transparent.
Additionally, there is a transparent margin towards the sides so the preview is not damaged when sending the image via messengers.
The code of my function is as follows:
public static Bitmap edit_image(Bitmap src,boolean makeborder) {
int width = src.getWidth();
int height = src.getHeight();
int A, R, G, B;
int pixel;
int middlex = width/2;
int middley = height/2;
int seitenlaenge,startx,starty;
if(width>height)
{
seitenlaenge=height;
starty=0;
startx = middlex - (seitenlaenge/2);
}
else
{
seitenlaenge=width;
startx=0;
starty = middley - (seitenlaenge/2);
}
int kreisradius = seitenlaenge/2;
int mittx = startx + kreisradius;
int mitty = starty + kreisradius;
int border=2;
int seitenabstand=55;
Bitmap bmOut = Bitmap.createBitmap(seitenlaenge+seitenabstand, seitenlaenge+seitenabstand, Bitmap.Config.ARGB_8888);
bmOut.setHasAlpha(true);
for(int x = 0; x < width; ++x) {
for(int y = 0; y < height; ++y) {
int distzumitte = (int) (Math.pow(mittx-x,2) + Math.pow(mitty-y,2)); // (Xm-Xp)^2 + (Ym-Yp)^2 = dist^2
distzumitte = (int) Math.sqrt(distzumitte);
pixel = src.getPixel(x, y);
A = Color.alpha(pixel);
R = (int)Color.red(pixel);
G = (int)Color.green(pixel);
B = (int)Color.blue(pixel);
int color = Color.argb(A, R, G, B);
int afterx=x-startx+(seitenabstand/2);
int aftery=y-starty+(seitenabstand/2);
if(x < startx || y < starty || afterx>=seitenlaenge+seitenabstand || aftery>=seitenlaenge+seitenabstand) //seitenrand
{
continue;
}
else if(distzumitte > kreisradius)
{
color=0x00FFFFFF;
}
else if(distzumitte > kreisradius-border && makeborder) //border
{
color = Color.argb(A, 0, 0, 0);
}
bmOut.setPixel(afterx, aftery, color);
}
}
return bmOut;
}
This function works fine, but there are some problems occurring that I wasn't able to resolve yet:
1. The quality of the image is decreased significantly.
2. The border is not really round, but appears to be flat at the edges of the image (on some devices?!).
I'd appreciate any help regarding these problems. I have to admit that I'm not the best at math and there is probably a better formula to create the border.
Your source code is hard to read, since it is a mix of German and English in the variable names. Additionally, you don't say which image library you use, so we don't know exactly where the classes Bitmap and Color come from.
Anyway, it is very obvious that you are operating only on a Bitmap. A bitmap means the whole image is stored in RAM pixel by pixel; there is no lossy compression. I don't see anything in your source code that can affect the quality of the image.
It is very likely that the answer is in the code that you don't show us. Additionally, what you describe (both of the problems) sounds like very typical low-quality JPEG compression. I am sure that somewhere after you call your function, you convert/save the image to JPEG. Try saving it at that point as BMP, TIFF or PNG instead and see whether the error disappears magically. Maybe you can also set the quality level of the JPEG somewhere to avoid this.
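On Android that would mean, for example, compressing the resulting Bitmap as PNG when saving it (a minimal sketch; the file path is just an example):
Bitmap result = edit_image(src, true);
try (FileOutputStream out = new FileOutputStream("/sdcard/avatar.png")) {
    result.compress(Bitmap.CompressFormat.PNG, 100, out);   // PNG is lossless; the quality value is ignored
} catch (IOException e) {
    e.printStackTrace();
}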
To make it easier for others to (maybe) also find a good answer, please allow me to translate your code to English:
public static Bitmap edit_image(Bitmap src,boolean makeborder) {
int width = src.getWidth();
int height = src.getHeight();
int A, R, G, B;
int pixel;
int middlex = width/2;
int middley = height/2;
int sideLength,startx,starty;
if(width>height)
{
sideLength=height;
starty=0;
startx = middlex - (sideLength/2);
}
else
{
sideLength=width;
startx=0;
starty = middley - (sideLength/2);
}
int circleRadius = sideLength/2;
int middleX = startx + circleRadius;
int middleY = starty + circleRadius;
int border=2;
int sideDistance=55;
Bitmap bmOut = Bitmap.createBitmap(sideLength+sideDistance, sideLength+sideDistance, Bitmap.Config.ARGB_8888);
bmOut.setHasAlpha(true);
for(int x = 0; x < width; ++x) {
for(int y = 0; y < height; ++y) {
int distanceToMiddle = (int) (Math.pow(middleX-x,2) + Math.pow(middleY-y,2)); // (Xm-Xp)^2 + (Ym-Yp)^2 = dist^2
distanceToMiddle = (int) Math.sqrt(distanceToMiddle);
pixel = src.getPixel(x, y);
A = Color.alpha(pixel);
R = (int)Color.red(pixel);
G = (int)Color.green(pixel);
B = (int)Color.blue(pixel);
int color = Color.argb(A, R, G, B);
int afterx=x-startx+(sideDistance/2);
int aftery=y-starty+(sideDistance/2);
if(x < startx || y < starty || afterx>=sideLength+sideDistance || aftery>=sideLength+sideDistance) //margin
{
continue;
}
else if(distanceToMiddle > circleRadius)
{
color=0x00FFFFFF;
}
else if(distanceToMiddle > circleRadius-border && makeborder) //border
{
color = Color.argb(A, 0, 0, 0);
}
bmOut.setPixel(afterx, aftery, color);
}
}
return bmOut;
}
I think that you need to check PorterDuffXfermode.
You will find some technical information about image compositing modes HERE.
There is a good example of making a bitmap with rounded edges HERE. You just need to tweak the source code a bit and you're ready to go...
Hope it will help.
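For reference, a minimal sketch of that approach adapted to a circle (the SRC_IN transfer mode keeps the bitmap only where the destination circle was drawn; names and sizes here are just for illustration, not from the linked example):
public static Bitmap cropToCircle(Bitmap src) {
    int size = Math.min(src.getWidth(), src.getHeight());
    Bitmap output = Bitmap.createBitmap(size, size, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(output);
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    // 1. Draw the destination shape: an opaque, anti-aliased circle.
    canvas.drawCircle(size / 2f, size / 2f, size / 2f, paint);
    // 2. Draw the source bitmap, keeping it only where the circle is opaque.
    paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN));
    float left = (size - src.getWidth()) / 2f;    // centers the larger dimension (offset may be negative)
    float top = (size - src.getHeight()) / 2f;
    canvas.drawBitmap(src, left, top, paint);
    return output;
}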
Regarding the quality, I can't see anything wrong with your method. Running the code with Java Swing, no quality is lost. The only problem is that the image has aliased edges.
The aliasing problem will tend to disappear as the screen resolution increases and will be more noticeable at lower resolutions. This might explain why you see it on some devices only. The same problem applies to your border, but in that case it is more noticeable since the colour is solid black.
Your algorithm defines a square area of the original image. To find the square, it starts from the image's center and expands to either the width or the height of the image, whichever is smaller. I am referring to this area as the square.
The aliasing is caused by your code that sets the colors (I am using pseudo-code):
if ( outOfSquare() ) {
continue; // case 1: this works but you depend upon the new image's default pixel value i.e. transparent black
} else if ( insideSquare() && ! insideCircle() ) {
color = 0x00FFFFFF; // case 2: transparent white. <- Redundant
} else if ( insideBorder() ) {
color = Color.argb(A, 0, 0, 0); // case 3: Black color using the transparency of the original image.
} else { // inside the inner circle
// case 4: leave image color
}
Some notes about the code:
Case 1 depends upon the default pixel value of the new image, i.e. transparent black. It works, but it is better to set it explicitly.
Case 2 is redundant. Handle it in the same way you handle case 1. We are only interested in what happens inside the circle.
Case 3 (when you draw the border) is unclear about what it is meant to do. Using the alpha of the original image can mess up your new image if the original alpha happens to vary along the circle's edges. So this is clearly wrong and, depending on the image, can potentially be another cause of your problems.
Case 4 is ok.
Now at your circle's periphery the following color transitions take place:
If border is not used: full transparency -> full image color (cases 2 and 4 in the pseudocode)
If border is used: full transparency -> full black -> full image color (cases 2, 3 and 4)
To achieve better quality at the edges you need to introduce some intermediate states that make the transitions smoother; the newly added steps are the "partial transparency" ones, and a code sketch follows the list:
Border is not used: full transparency -> partial transparency with image color -> full image color
Border is used: full transparency -> partial transparency of black -> full black -> partial transparency of black blended with the image color -> full image color
I hope that helps
I'm trying to build a program which can remove a single-colored border from an image.
The border is always white but the width of the border on the left and right side might differ from the width of the border at the top and bottom. So the image I want to extract is centered within the source image.
So from the following image I want to extract the green rectangle.
At the moment I don't know how to start solving this problem.
UPDATE
So finally calsign's code snippet, with some improvements on it, solved my problem. I realized that the border around the inner image may not be completely single-colored but can vary slightly. This led to some images being left with a small border.
I solved this problem by improving the comparison of the colors of two pixels: I compare the color distance of the two colors against a threshold. When the distance is below the threshold, the colors are treated as equal.
public Bitmap cropBorderFromBitmap(Bitmap bmp) {
//Convenience variables
int width = bmp.getWidth();
int height = bmp.getHeight();
int[] pixels = new int[height * width];
//Load the pixel data into the pixels array
bmp.getPixels(pixels, 0, width, 0, 0, width, height);
int length = pixels.length;
int borderColor = pixels[0];
//Locate the start of the border
int borderStart = 0;
for(int i = 0; i < length; i ++) {
// 1. Compare the color of two pixels whether they differ
// 2. Check whether the difference is significant
if(pixels[i] != borderColor && !sameColor(borderColor, pixels[i])) {
Log.i(TAG,"Current Color: " + pixels[i]);
borderStart = i;
break;
}
}
//Locate the end of the border
int borderEnd = 0;
for(int i = length - 1; i >= 0; i --) {
if(pixels[i] != borderColor && !sameColor(borderColor, pixels[i])) {
Log.i(TAG,"Current Color: " + pixels[i]);
borderEnd = length - i;
break;
}
}
//Calculate the margins
int leftMargin = borderStart % width;
int rightMargin = borderEnd % width;
int topMargin = borderStart / width;
int bottomMargin = borderEnd / width;
//Create the new, cropped version of the Bitmap
bmp = Bitmap.createBitmap(bmp, leftMargin, topMargin, width - leftMargin - rightMargin, height - topMargin - bottomMargin);
return bmp;
}
private boolean sameColor(int color1, int color2){
// Split the colors into channel values. (With ARGB ints, bits 0-7 are actually blue and bits 16-23 red,
// so r and b are swapped here; the squared distance is symmetric, so the result is unaffected.)
long r1 = (color1)&0xFF;
long g1 = (color1 >>8)&0xFF;
long b1 = (color1 >>16)&0xFF;
long r2 = (color2)&0xFF;
long g2 = (color2 >>8)&0xFF;
long b2 = (color2 >>16)&0xFF;
long dist = (r2 - r1) * (r2 - r1) + (g2 - g1) * (g2 - g1) + (b2 - b1) *(b2 - b1);
// Check vs. threshold
return dist < 200;
}
Perhaps not the best use of the APIs to find a solution, but the one that came to mind: directly modify the image's pixels.
You can get a Bitmap's pixels with getPixels() and then create a new, cropped Bitmap with createBitmap(). Then, it's just a matter of finding the dimensions of the border.
You can find the color of the border by accessing the pixel located at position 0, and then compare that value (an int) to the value of each succeeding pixel until you reach the border (the first pixel that isn't that color). With a little bit of math, it can be done.
Here is some simple code that demonstrates the point:
private Bitmap cropBorderFromBitmap(Bitmap bmp) {
//Convenience variables
int width = bmp.getWidth();
int height = bmp.getHeight();
//Load the pixel data into the pixels array
int[] pixels = new int[width * height];
bmp.getPixels(pixels, 0, width, 0, 0, width, height);
int length = pixels.length;
int borderColor = pixels[0];
//Locate the start of the border
int borderStart = 0;
for(int i = 0; i < length; i ++) {
if(pixels[i] != borderColor) {
borderStart = i;
break;
}
}
//Locate the end of the border
int borderEnd = 0;
for(int i = length - 1; i >= 0; i --) {
if(pixels[i] != borderColor) {
borderEnd = length - i;
break;
}
}
//Calculate the margins
int leftMargin = borderStart % width;
int rightMargin = borderEnd % width;
int topMargin = borderStart / width;
int bottomMargin = borderEnd / width;
//Create the new, cropped version of the Bitmap
return Bitmap.createBitmap(bmp, leftMargin, topMargin, width - leftMargin - rightMargin, height - topMargin - bottomMargin);
}
This is untested and lacks error checking (e.g., what if the width is 0?), but it should serve as a proof-of-concept.
EDIT: I just realized that I failed to complete the getPixels() method. The wonders of testing your code... it's fixed now.
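As a guard for the edge case where no non-border pixel is found (i.e. the whole bitmap is a single colour), something like this could be added before the margins are calculated (a sketch, using the variable names from the snippet above):
if (borderStart == 0 && borderEnd == 0) {
    return bmp;   // nothing to crop
}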
If the frame around your picture is uniform then all you need to do is investigate when the pixels in the image change.
But first things first: you need a BufferedImage object to work with. It's a class that allows you to traverse the bitmap of an image (http://docs.oracle.com/javase/6/docs/api/java/awt/image/BufferedImage.html).
If you have the image saved as a file you need to call this method:
BufferedImage bimage = ImageIO.read(new File(file));
Now you can fetch the bitmap array from the bimage:
bimage.getRGB(int startX, int startY, int w, int h, int[] rgbArray, int offset, int scansize)
like this:
int[] rgb = bimage.getRGB(0, 0, bimage.getWidth(), bimage.getHeight(), null, 0, bimage.getWidth());
There could be some issues here with the ColorModel, so be sure to read up in the documentation on how to fetch the appropriate RGB values from different file types.
Now that you have the rgb array you should start searching how far the frame stretches out from the middle of the picture. Keep in mind that this is a single-dimensional array: all the lines are written here sequentially, one after another, as if you sliced the picture into lines 1 pixel high and glued them together to form one long line.
This actually works to our advantage, because the first different pixel we encounter in this array will work as a great reference point.
So now we just do something like this:
int frameColor = rgb[0], i = 0;
while (i < bimage.getWidth() * bimage.getHeight() && rgb[i] == frameColor) {
    i++;
}
So now, if the frame of your image is uniform, the top offset is the same as the bottom offset, and the left offset is the same as the right offset, then the index in the variable i is very likely to be the first pixel of the green rectangle.
In order to know which row and which column it is you need the following code:
int row = i / bimage.getWidth();
int column = i % bimage.getWidth();
Now the problem is that you may have an image embedded in the frame whose upper-left corner is of the same color as the frame, for example an image of a green rectangle with white corners inside a white frame. Is this the case?
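Assuming that corner case does not apply and the frame really is symmetric on all four sides, extracting the inner image is then straightforward (a sketch using the row and column computed above):
int cropWidth = bimage.getWidth() - 2 * column;
int cropHeight = bimage.getHeight() - 2 * row;
BufferedImage inner = bimage.getSubimage(column, row, cropWidth, cropHeight);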
You can use the public int getPixel(int x, int y) function, which returns the color of each pixel.
It should be easy to run through the border lines and verify that the color is still the same.
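A sketch of that check (a hypothetical helper, not from the answer): verify that the entire outermost row and column of the Bitmap still have the border colour.
boolean borderIsUniform(Bitmap bmp, int borderColor) {
    int w = bmp.getWidth(), h = bmp.getHeight();
    for (int x = 0; x < w; x++) {
        if (bmp.getPixel(x, 0) != borderColor || bmp.getPixel(x, h - 1) != borderColor) return false;
    }
    for (int y = 0; y < h; y++) {
        if (bmp.getPixel(0, y) != borderColor || bmp.getPixel(w - 1, y) != borderColor) return false;
    }
    return true;
}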
This is my solution:
private Bitmap cropBorderFromBitmap(Bitmap bmp) {
final int borderWidth = 10; //preserved border width
final int borderColor = -1; //WHITE
int width = bmp.getWidth();
int height = bmp.getHeight();
int[] pixels = new int[width * height];
bmp.getPixels(pixels, 0, width, 0, 0, width, height);
int minX = -1;
int minY = -1;
int maxX = -1;
int maxY = -1;
for(int y = 0; y < height; y++) {
for(int x = 0; x < width; x++) {
if(pixels[y * width + x] != borderColor) { // use the pixels array fetched above; faster than calling getPixel() per pixel
minX = (minX == -1) ? x : Math.min(x, minX);
minY = (minY == -1) ? y : Math.min(y, minY);
maxX = (maxX == -1) ? x : Math.max(x, maxX);
maxY = (maxY == -1) ? y : Math.max(y, maxY);
}
}
}
minX = Math.max(0, minX - borderWidth);
maxX = Math.min(width, maxX + borderWidth);
minY = Math.max(0, minY - borderWidth);
maxY = Math.min(height, maxY + borderWidth);
//Create the new, cropped version of the Bitmap
return Bitmap.createBitmap(bmp, minX, minY, maxX - minX, maxY-minY);
}
How can I convert the white background of an image into a transparent background? Can anyone tell me how to do this?
The first result from Google is this:
Make a color transparent
http://www.rgagnon.com/javadetails/java-0265.html
It makes the blue part of an image transparent, but I'm sure you can adapt that to use white instead
(hint: Pass Color.WHITE to the makeColorTransparent function, instead of Color.BLUE)
Found a more complete and modern answer here: How to make a color transparent in a BufferedImage and save as PNG
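A minimal usage sketch, assuming the makeColorTransparent(BufferedImage, Color) helper from the linked page (the signature is my assumption; check the page for the exact one):
BufferedImage source = ImageIO.read(new File("input.jpg"));   // file name is just an example
Image result = makeColorTransparent(source, Color.WHITE);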
This method will make the background transparent. You need to pass in the image you want to modify, the colour, and a tolerance.
final int color = ret.getRGB(0, 0); // 'ret' is the source BufferedImage; its top-left pixel is taken as the background colour
final Image imageWithTransparency = makeColorTransparent(ret, new Color(color), 10);
final BufferedImage transparentImage = imageToBufferedImage(imageWithTransparency);
private static BufferedImage imageToBufferedImage(final Image image) {
final BufferedImage bufferedImage =
new BufferedImage(image.getWidth(null), image.getHeight(null), BufferedImage.TYPE_INT_ARGB);
final Graphics2D g2 = bufferedImage.createGraphics();
g2.drawImage(image, 0, 0, null);
g2.dispose();
return bufferedImage;
}
private static Image makeColorTransparent(final BufferedImage im, final Color color, int tolerance) {
int temp = 0;
if (tolerance < 0 || tolerance > 100) {
System.err.println("The tolerance is a percentage, so the value has to be between 0 and 100.");
temp = 0;
} else {
temp = tolerance * (0xFF000000 | 0xFF000000) / 100;
}
final int toleranceRGB = Math.abs(temp);
final ImageFilter filter = new RGBImageFilter() {
// The color we are looking for (white)... Alpha bits are set to opaque
public int markerRGBFrom = (color.getRGB() | 0xFF000000) - toleranceRGB;
public int markerRGBTo = (color.getRGB() | 0xFF000000) + toleranceRGB;
public final int filterRGB(final int x, final int y, final int rgb) {
if ((rgb | 0xFF000000) >= markerRGBFrom && (rgb | 0xFF000000) <= markerRGBTo) {
// Mark the alpha bits as zero - transparent
return 0x00FFFFFF & rgb;
} else {
// Nothing to do
return rgb;
}
}
};
final ImageProducer ip = new FilteredImageSource(im.getSource(), filter);
return Toolkit.getDefaultToolkit().createImage(ip);
}
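To keep the transparency when saving, write the result in a format with an alpha channel such as PNG (the file name is just an example):
ImageIO.write(transparentImage, "png", new File("output.png"));   // throws IOException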
Here is my solution. This filter will remove the background from any image, as long as the background color is the one in the top-left corner.
private static class BackgroundFilter extends RGBImageFilter{
boolean setUp = false;
int bgColor;
@Override
public int filterRGB(int x, int y, int rgb) {
int colorWOAlpha = rgb & 0xFFFFFF;
if( ! setUp && x == 0 && y == 0 ){
bgColor = colorWOAlpha;
setUp = true;
}
else if( colorWOAlpha == bgColor )
return colorWOAlpha;
return rgb;
}
}
Elsewhere...
ImageFilter bgFilter = new BackgroundFilter();
ImageProducer ip = new FilteredImageSource(image.getSource(), bgFilter);
image = Toolkit.getDefaultToolkit().createImage(ip);
I am aware that this question is over a decade old and that some answers have already been given. However, none of them is satisfactory if the pixels inside the image are the same color as the background. Let's take a practical example. Given these images:
Both have a white background, but the white color is also inside the image to be cut out. In other words, the white pixels on the outside of the two pennants must become transparent, while the ones on the inside must remain as they are. Add to this the complication that the white of the background is not perfectly white (due to JPEG compression), so a tolerance is needed. The issue can be made more complex by figures that are not only convex, but also concave.
I created an algorithm in Java that solves the problem very well, I tested it with the two figures shown here. The following code refers to the Java API of Codename One (https://www.codenameone.com/javadoc/), but can be repurposed to the Java SE API or implemented in other languages. The important thing is to understand the rationale.
/**
* Given an image with no transparency, it makes the white background
* transparent, provided that the entire image outline has a different color
* from the background; the internal pixels of the image, even if they have
* the same color as the background, are not changed.
*
* @param source image with a white background; the image must have an
* outline of a different color from background.
* @return a new image with a transparent background
*/
public static Image makeBackgroundTransparent(Image source) {
/*
* Algorithm
*
* Pixels must be iterated in the four possible directions: (1) left to
* right, for each row (top to bottom); (2) from right to left, for each
* row (from top to bottom); (3) from top to bottom, for each column
* (from left to right); (4) from bottom to top, for each column (from
* left to right).
*
* In each iteration, each white pixel is replaced with a transparent
* one. Each iteration ends when a pixel of color other than white (or
* a transparent pixel) is encountered.
*/
if (source == null) {
throw new IllegalArgumentException("ImageUtilities.makeBackgroundTransparent -> null source image");
}
if (source instanceof FontImage) {
source = ((FontImage) source).toImage();
}
int[] pixels = source.getRGB(); // array instance containing the ARGB data within this image
int width = source.getWidth();
int height = source.getHeight();
int tolerance = 1000000; // value chosen through several attempts
// check if the first pixel is transparent
if ((pixels[0] >> 24) == 0x00) {
return source; // nothing to do, the image already has a transparent background
}
Log.p("Converting white background to transparent...", Log.DEBUG);
// 1. Left to right, for each row (top to bottom)
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int color = pixels[y * width + x];
if ((color >> 24) != 0x00 && color >= ColorUtil.WHITE - tolerance && color <= ColorUtil.WHITE + tolerance) { // means white with tolerance and no transparency
pixels[y * width + x] = 0x00; // means full transparency
} else {
break;
}
}
}
// 2. Right to left, for each row (top to bottom)
for (int y = 0; y < height; y++) {
for (int x = width - 1; x >= 0; x--) {
int color = pixels[y * width + x];
if ((color >> 24) != 0x00 && color >= ColorUtil.WHITE - tolerance && color <= ColorUtil.WHITE + tolerance) { // means white with tolerance and no transparency
pixels[y * width + x] = 0x00; // means full transparency
} else {
break;
}
}
}
// 3. Top to bottom, for each column (from left to right)
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
int color = pixels[y * width + x];
if ((color >> 24) != 0x00 && color >= ColorUtil.WHITE - tolerance && color <= ColorUtil.WHITE + tolerance) { // means white with tolerance and no transparency
pixels[y * width + x] = 0x00; // means full transparency
} else {
break;
}
}
}
// 4. Bottom to top, for each column (from left to right)
for (int x = 0; x < width; x++) {
for (int y = height - 1; y >= 0; y--) {
int color = pixels[y * width + x];
if ((color >> 24) != 0x00 && color >= ColorUtil.WHITE - tolerance && color <= ColorUtil.WHITE + tolerance) { // means white with tolerance and no transparency
pixels[y * width + x] = 0x00; // means full transparency
} else {
break;
}
}
}
return EncodedImage.createFromRGB(pixels, width, height, false);
}