I have this code to reduce image noise:
for (int x = 0; x < bitmap.getWidth(); x++) {
    for (int y = 0; y < bitmap.getHeight(); y++) {
        // get one pixel's color
        int pixel = bitmap.getPixel(x, y);
        // extract the RGB components
        int R = Color.red(pixel);
        int G = Color.green(pixel);
        int B = Color.blue(pixel);
        // convert to a single grayscale value
        R = G = B = (int) (0.299 * R + 0.587 * G + 0.114 * B);
        // convert to black and white + remove noise
        if (R > 162 && G > 162 && B > 162)
            bitmap.setPixel(x, y, Color.WHITE);
        else if (R < 162 && G < 162 && B < 162)
            bitmap.setPixel(x, y, Color.BLACK);
    }
}
But it takes a very long time to generate the result. Is there any way to optimize this code to make it faster?
Don't use getPixel. Get the image data as an array and use arithmetic to index the correct pixel, written so that as few multiplications as possible are used. Do the same instead of setPixel.
Don't use Color.red(), Color.green(), etc. Use masking; it's more efficient than a function call.
Even better, drop into the NDK and do this in C. Image manipulation in Java is generally less than optimal.
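To make the first two points concrete, here is a minimal sketch of the array-based approach (my illustration, not part of the original answer), assuming a mutable ARGB_8888 Bitmap and keeping the question's threshold of 162. Note it forces every pixel to pure black or white, whereas the original left pixels exactly at the threshold unchanged:

int width = bitmap.getWidth();
int height = bitmap.getHeight();
int[] pixels = new int[width * height];
// one call copies the whole image into a flat array
bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
int index = 0; // running index: no multiplication per pixel
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++, index++) {
        int pixel = pixels[index];
        // channel extraction by masking instead of Color.red()/green()/blue()
        int r = (pixel >> 16) & 0xff;
        int g = (pixel >> 8) & 0xff;
        int b = pixel & 0xff;
        int gray = (int) (0.299 * r + 0.587 * g + 0.114 * b);
        pixels[index] = gray > 162 ? 0xFFFFFFFF : 0xFF000000; // white : black
    }
}
// write the whole array back in one call (bitmap must be mutable)
bitmap.setPixels(pixels, 0, width, 0, 0, width, height);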
I want to convert a BufferedImage from RGBA format to CMYK format without using auto conversion tools or libraries, so I tried to extract the RGBA values from the individual pixels that I got using BufferedImage.getRGB(). Here is what I've done so far:
BufferedImage img = ImageIO.read(new File("image path"));
int R, G, B, pixel, A;
float Rc, Gc, Bc, K, C, M, Y;
int height = img.getHeight();
int width = img.getWidth();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        pixel = img.getRGB(x, y);
        // I shifted the int bytes to get the RGBA values
        A = (pixel >> 24) & 0xff;
        R = (pixel >> 16) & 0xff;
        G = (pixel >> 8) & 0xff;
        B = (pixel) & 0xff;
        Rc = (float) (R / 255.0);
        Gc = (float) (G / 255.0);
        Bc = (float) (B / 255.0);
        // Equations I found on the internet to get CMYK values
        K = 1 - Math.max(Bc, Math.max(Rc, Gc));
        C = (1 - Rc - K) / (1 - K);
        Y = (1 - Bc - K) / (1 - K);
        M = (1 - Gc - K) / (1 - K);
    }
}
Now that I've extracted the values, I want to draw or construct an image using them. Can you tell me of a method or a way to do this? I don't think BufferedImage.setRGB() would work. Also, when I printed the values of C, Y, and M, some of them had NaN values; can someone tell me what that means and how to deal with it?
While it is possible, converting RGB to CMYK without a proper color profile will not produce the best results. For better performance and higher color fidelity, I really recommend using an ICC color profile (see ICC_Profile and ICC_ColorSpace classes) and ColorConvertOp. :-)
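For reference, a minimal sketch of that profile-based route (my snippet, not part of the original answer); the profile file name here is hypothetical, and you need a real CMYK ICC profile on disk:

// Load a CMYK ICC profile and convert via ColorConvertOp.
// "cmyk_profile.icc" is a hypothetical path; substitute a real profile file.
ICC_Profile cmykProfile = ICC_Profile.getInstance("cmyk_profile.icc");
ColorSpace cmykSpace = new ICC_ColorSpace(cmykProfile);
ColorConvertOp toCmyk = new ColorConvertOp(cmykSpace, null);
BufferedImage cmykImg = toCmyk.filter(rgbImg, null); // rgbImg: your source image; dest is created automatically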
Anyway, here's how to do it using your own conversion. The important part is creating a CMYK color space, and a ColorModel and BufferedImage using that color space (you could also load a CMYK color space from an ICC profile as mentioned above, but the colors would probably look more off, as it uses different calculations than you do).
public static void main(String[] args) throws IOException {
    BufferedImage img = ImageIO.read(new File(args[0]));
    int height = img.getHeight();
    int width = img.getWidth();
    // Create a color model and image in CMYK color space (see custom class below)
    ComponentColorModel cmykModel = new ComponentColorModel(CMYKColorSpace.INSTANCE, false, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
    BufferedImage cmykImg = new BufferedImage(cmykModel, cmykModel.createCompatibleWritableRaster(width, height), cmykModel.isAlphaPremultiplied(), null);
    WritableRaster cmykRaster = cmykImg.getRaster();
    int R, G, B, pixel;
    float Rc, Gc, Bc, K, C, M, Y;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            pixel = img.getRGB(x, y);
            // Now, as cmykImg already is in CMYK color space, you could actually just invoke
            //cmykImg.setRGB(x, y, pixel);
            // and the method would perform automatic conversion to the dest color space (CMYK)
            // But, here you go... (I just cleaned up your code a little bit):
            R = (pixel >> 16) & 0xff;
            G = (pixel >> 8) & 0xff;
            B = (pixel) & 0xff;
            Rc = R / 255f;
            Gc = G / 255f;
            Bc = B / 255f;
            // Equations I found on the internet to get CMYK values
            K = 1 - Math.max(Bc, Math.max(Rc, Gc));
            if (K == 1f) {
                // All black (this is where you would get NaN values I think)
                C = M = Y = 0;
            }
            else {
                C = (1 - Rc - K) / (1 - K);
                M = (1 - Gc - K) / (1 - K);
                Y = (1 - Bc - K) / (1 - K);
            }
            // ...and store the CMYK values (as bytes in 0..255 range) in the raster
            cmykRaster.setDataElements(x, y, new byte[] {(byte) (C * 255), (byte) (M * 255), (byte) (Y * 255), (byte) (K * 255)});
        }
    }
    // You should now have a CMYK buffered image
    System.out.println("cmykImg: " + cmykImg);
}
// A simple and not very accurate CMYK color space
// Full source at https://github.com/haraldk/TwelveMonkeys/blob/master/imageio/imageio-core/src/main/java/com/twelvemonkeys/imageio/color/CMYKColorSpace.java
final static class CMYKColorSpace extends ColorSpace {
    static final ColorSpace INSTANCE = new CMYKColorSpace();
    final ColorSpace sRGB = getInstance(CS_sRGB);

    private CMYKColorSpace() {
        super(ColorSpace.TYPE_CMYK, 4);
    }

    public static ColorSpace getInstance() {
        return INSTANCE;
    }

    public float[] toRGB(float[] colorvalue) {
        return new float[] {
                (1 - colorvalue[0]) * (1 - colorvalue[3]),
                (1 - colorvalue[1]) * (1 - colorvalue[3]),
                (1 - colorvalue[2]) * (1 - colorvalue[3])
        };
    }

    public float[] fromRGB(float[] rgbvalue) {
        // NOTE: This is essentially the same equation you use, except
        // this is slightly optimized, and values are already in range [0..1]
        // Compute CMY
        float c = 1 - rgbvalue[0];
        float m = 1 - rgbvalue[1];
        float y = 1 - rgbvalue[2];
        // Find K
        float k = Math.min(c, Math.min(m, y));
        // Convert to CMYK values
        return new float[] {(c - k), (m - k), (y - k), k};
    }

    public float[] toCIEXYZ(float[] colorvalue) {
        return sRGB.toCIEXYZ(toRGB(colorvalue));
    }

    public float[] fromCIEXYZ(float[] colorvalue) {
        // Convert XYZ to sRGB first, then sRGB to CMYK
        return fromRGB(sRGB.fromCIEXYZ(colorvalue));
    }
}
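As a quick sanity check of the color space above (my snippet, not part of the original answer), pure red should map to full magenta and yellow with no cyan or black:

float[] cmyk = CMYKColorSpace.INSTANCE.fromRGB(new float[] {1f, 0f, 0f});
// cmyk is now {0.0, 1.0, 1.0, 0.0}, i.e. C=0, M=1, Y=1, K=0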
PS: Your question talks about RGBA and CMYK, but your code just ignores the alpha value, so I did the same. If you really wanted to, you could just keep the alpha value as-is and have a CMYK+A image, to allow alpha-compositing in CMYK color space. I'll leave that as an exercise. ;-)
On Wikipedia you can see an article describing how a summed area table (integral image) works. It's a very important part of computer vision and image analysis.
I'm trying to implement it. The concept is really simple:
Make an array[imageheight][imagewidth]
Every array member should contain the sum of all pixels to the left of and above it in the original image
To get the sum over any rectangle, use the A - B - C + D formula, where A, B, C, and D are the table values at the rectangle's corners (the illustration from the original article is omitted here)
So I made this function to sum all pixels on BufferedImage:
public static double[][] integralImageGrayscale(BufferedImage image) {
    // Cache width and height in variables
    int w = image.getWidth();
    int h = image.getHeight();
    // Create the 2D array as large as the image is
    // Notice that I use [Y][X] coordinates to comply with the formula
    double[][] integral_image = new double[h][w];
    // Sum to be assigned to the pixels
    double the_sum = 0;
    // Well... the loop
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            // Get pixel. It's actually 0xAARRGGBB, so the function should be getARGB
            int pixel = image.getRGB(x, y);
            // Extrapolate color values from the integer
            the_sum += ((pixel & 0x00FF0000) >> 16) + ((pixel & 0x0000FF00) >> 8) + (pixel & 0x000000FF);
            integral_image[y][x] = the_sum;
        }
    }
    // Return the array
    return integral_image;
}
I also made a debug function, and its output convinced me that it works: you can see how the white areas influence the sum of the image (debug visualization omitted here).
But if I make this test case:
// Summed area table (thing is a BufferedImage)
double[][] is = ScreenWatcher.integralImageGrayscale(thing);
// Sum generated by a normal for loop
double ss = ScreenWatcher.grayscaleSum(thing);
// Height of the resulting array
int ish = is.length;
// Width of the resulting array. Also throws a nasty error if something goes wrong
int isw = is[is.length - 1].length;
// Testing whether different methods give the same results
System.out.println(
        ss + " =? " +
        // Last "pixel" in the integral image must contain the sum of the image
        is[ish - 1][isw - 1] + " =? " +
        // The "sum over rectangle" with a rectangle that contains the whole image
        //   A            B                C                D
        (+is[0][0] - is[0][isw - 1] - is[ish - 1][0] + is[ish - 1][isw - 1])
);
I get a sad result:
1.7471835E7 =? 1.7471835E7 =? 112455.0
Interestingly, a pure white image returns 0:
7650000.0 =? 7650000.0 =? 0.0 - this was a 100x100 white image, and 765 is 3*255, so everything seems right
I have no idea how to get to the bottom of this. Everything seems too clear to contain a mistake. So either there's a typo in the code above, or the logic is wrong. Any ideas?
Your problem is here:
//Extrapolate color values from the integer
the_sum += ((pixel & 0x00FF0000) >> 16) + ((pixel & 0x0000FF00) >> 8) + (pixel & 0x000000FF);
integral_image[y][x] = the_sum;
What you should be doing is:
double A = (x > 0 && y > 0) ? integral_image[y - 1][x - 1] : 0;
double B = (x > 0) ? integral_image[y][x - 1] : 0;
double C = (y > 0) ? integral_image[y - 1][x] : 0;
integral_image[y][x] = -A + B + C
        + ((pixel & 0x00FF0000) >> 16) + ((pixel & 0x0000FF00) >> 8) + (pixel & 0x000000FF);
(with no the_sum variable).
Evaluating the sum for the portion of the image (minx, miny) -> (maxx, maxy) inclusively can now be done in constant time using the values in integral_image:
double A = (minx > 0 && miny > 0) ? integral_image[miny - 1][minx - 1] : 0;
double B = (minx > 0) ? integral_image[maxy][minx - 1] : 0;
double C = (miny > 0) ? integral_image[miny - 1][maxx] : 0;
double D = integral_image[maxy][maxx];
double sum = A - B - C + D;
Note that minx-1 and miny-1 are used because of the inclusivity on the minimum coordinates.
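Putting the pieces together, here is a minimal sketch of the corrected function, keeping the question's plain R+G+B grayscale sum (this consolidation is mine, not the answerer's exact code):

public static double[][] integralImageGrayscale(BufferedImage image) {
    int w = image.getWidth();
    int h = image.getHeight();
    double[][] integral = new double[h][w];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int pixel = image.getRGB(x, y);
            // grayscale value as in the question: R + G + B
            double value = ((pixel >> 16) & 0xff) + ((pixel >> 8) & 0xff) + (pixel & 0xff);
            double A = (x > 0 && y > 0) ? integral[y - 1][x - 1] : 0;
            double B = (x > 0) ? integral[y][x - 1] : 0;
            double C = (y > 0) ? integral[y - 1][x] : 0;
            // standard recurrence: current pixel + left + above - above-left
            integral[y][x] = -A + B + C + value;
        }
    }
    return integral;
}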
For some reason, I can change a buffered image by using setRGB but not by using the actual int array in the raster:
This works
BufferedImage img = new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < 32; y++) {
    for (int x = 0; x < 32; x++) {
        int gray = (int) (MathUtil.noise(x, y) * 255); // I have tested the noise function and know it works fine
        img.setRGB(x, y, gray << 16 | gray << 8 | gray);
    }
}
This does not
BufferedImage img = new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB);
int[] data = ((DataBufferInt) img.getData().getDataBuffer()).getData();
for (int y = 0; y < 32; y++) {
    for (int x = 0; x < 32; x++) {
        int gray = (int) (MathUtil.noise(x, y) * 255); // I have tested the noise function and know it works fine
        data[x + y * 32] = gray << 16 | gray << 8 | gray;
    }
}
Noise function:
public static float noise(int x, int y) {
    int n = x + y * 57;
    n = (n << 13) ^ n;
    return Math.abs((1.0f - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0f));
}
EDIT
Never mind, I fixed it. I needed to use getRaster :P
Because when you call BufferedImage.getData(), it returns a copy, not the actual backing array. So any changes you make directly to that array will not be reflected in the image.
From the JavaDoc for BufferedImage.getData():
Returns:
a Raster that is a copy of the image data.
Edit What's interesting is what it says for the same method in the Java 6 JavaDoc, it's more explicit about the copy's effects. I wonder why they changed it?
Returns the image as one large tile. The Raster returned is a copy of the image data is not updated if the image is changed
Could the answer be as simple as the changes in the data array not being reflected in the img object?
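For completeness, a minimal sketch of the fix the asker describes: getRaster() (unlike getData()) exposes the live backing buffer, so writes land in the image (same 32x32 TYPE_INT_RGB setup assumed):

BufferedImage img = new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB);
// getRaster() returns the image's own WritableRaster, not a copy
int[] data = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
for (int y = 0; y < 32; y++) {
    for (int x = 0; x < 32; x++) {
        int gray = (int) (MathUtil.noise(x, y) * 255);
        data[x + y * 32] = gray << 16 | gray << 8 | gray; // now visible in img
    }
}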
How do I implement a low-pass filter? I have:
BufferedImage img;
int width = img.getWidth();
int height = img.getHeight();
int L = (int) (f * Math.min(width, height));
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        if (x >= width / 2 - L && x <= width / 2 + L && y >= -L + height / 2 && y <= L + height / 2) {
            img.setRGB(x, y, 0);
        }
    }
}
But first I should transform the image. How?
Your code as written would just set the pixels in a central square of the image to black. If you did this in the frequency domain, you could build a low-pass filter, because zeroing coefficients removes those frequency components while keeping the rest. To operate in the frequency domain you need to apply a Fourier transform first, filter the coefficients, and then apply the inverse transform. However, you need to take care about where in the transformed image the low-frequency components end up: different implementations of the Fourier transform put them either in the center of the transformed image or at its corners. If the low frequencies are at the corners, your central square covers exactly the high-frequency region and zeroing it gives a low-pass filter; if the low frequencies are in the center, the same code would instead act as a high-pass filter.
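For illustration, a hedged sketch of that pipeline using the JTransforms library's DoubleFFT_2D (org.jtransforms.fft in JTransforms 3.x); the grayscale conversion, the reuse of the question's cutoff L, and the wrap-around bookkeeping are my assumptions, not part of the original answer:

// Forward transform: JTransforms uses an interleaved re/im layout.
double[] data = new double[height * width * 2];
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        data[2 * (y * width + x)] = img.getRGB(x, y) & 0xff; // blue channel as a stand-in for gray
    }
}
DoubleFFT_2D fft = new DoubleFFT_2D(height, width);
fft.complexForward(data);

// Low-pass: without an fftshift, DC sits at (0, 0) and frequencies wrap around,
// so a bin's true distance from DC is min(u, width - u) and min(v, height - v).
for (int v = 0; v < height; v++) {
    for (int u = 0; u < width; u++) {
        int fu = Math.min(u, width - u);
        int fv = Math.min(v, height - v);
        if (fu > L || fv > L) { // outside the pass band: zero the coefficient
            data[2 * (v * width + u)] = 0;
            data[2 * (v * width + u) + 1] = 0;
        }
    }
}

// Inverse transform (scale = true) and write the result back as gray pixels.
fft.complexInverse(data, true);
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int gray = Math.max(0, Math.min(255, (int) Math.round(data[2 * (y * width + x)])));
        img.setRGB(x, y, (gray << 16) | (gray << 8) | gray);
    }
}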
I am setting pixels on my Bitmap at specific points.
For that I am using a for loop. But because it scans the whole image, it takes time.
What is an alternative that can help me execute it faster?
That for loop is as below:
public void drawLoop() {
    int ANTILAISING_TOLERANCE = 100;
    for (int x = 0; x < mask.getWidth(); x++) {
        for (int y = 0; y < mask.getHeight(); y++) {
            g = (mask.getPixel(x, y) & 0x0000FF00) >> 8;
            r = (mask.getPixel(x, y) & 0x00FF0000) >> 16;
            b = (mask.getPixel(x, y) & 0x000000FF);
            if (Math.abs(sR - r) < ANTILAISING_TOLERANCE && Math.abs(sG - g) < ANTILAISING_TOLERANCE && Math.abs(sB - b) < ANTILAISING_TOLERANCE)
                colored.setPixel(x, y, (colored.getPixel(x, y) & 0xFFFF0000));
        }
    }
    imageView.setImageBitmap(colored);
    coloreBitmap.add(colored.copy(Config.ARGB_8888, true));
    position = coloreBitmap.size() - 1;
    System.out.println("Position in drawFunction is: " + position);
}
Please help me with this.
Thanks.
I also had this problem.
My program checks every pixel on the bitmap to see whether the green (RGB) value is higher than the red and blue values, on a bitmap of 3264 x 2448 (the Samsung Galaxy S2 camera size).
It takes 3 seconds to scan and check the whole bitmap, which is pretty fast if you ask me.
This is my code:
try {
    decoder_image = BitmapRegionDecoder.newInstance("yourfilepath", false);
} catch (IOException e) {
    e.printStackTrace();
}
example filepath: /mnt/sdcard/DCIM/Camera/image.jpg
try {
    final int width = decoder_image.getWidth();
    final int height = decoder_image.getHeight();
    // Divide the bitmap into 1100x1100 sized chunks and process it.
    // This makes sure that the app will not be "overloaded"
    int wSteps = (int) Math.ceil(width / 1100.0);
    int hSteps = (int) Math.ceil(height / 1100.0);
    Rect rect = new Rect();
    for (int h = 0; h < hSteps; h++) {
        for (int w = 0; w < wSteps; w++) {
            int w2 = Math.min(width, (w + 1) * 1100);
            int h2 = Math.min(height, (h + 1) * 1100);
            rect.set(w * 1100, h * 1100, w2, h2);
            mask = decoder_image.decodeRegion(rect, null);
            try {
                int bWidth = mask.getWidth();
                int bHeight = mask.getHeight();
                int[] pixels = new int[bWidth * bHeight];
                mask.getPixels(pixels, 0, bWidth, 0, 0, bWidth, bHeight);
                for (int y = 0; y < bHeight; y++) {
                    for (int x = 0; x < bWidth; x++) {
                        int index = y * bWidth + x;
                        int r = (pixels[index] >> 16) & 0xff; // bitwise shifting
                        int g = (pixels[index] >> 8) & 0xff;
                        int b = pixels[index] & 0xff;
                        if (Math.abs(sR - r) < ANTILAISING_TOLERANCE && Math.abs(sG - g) < ANTILAISING_TOLERANCE && Math.abs(sB - b) < ANTILAISING_TOLERANCE) {
                            // offset by the chunk's origin, since x/y are chunk-local
                            int cx = w * 1100 + x;
                            int cy = h * 1100 + y;
                            colored.setPixel(cx, cy, (colored.getPixel(cx, cy) & 0xFFFF0000));
                        }
                    }
                }
            } finally {
                mask.recycle();
            }
        }
    }
    imageView.setImageBitmap(colored);
    coloreBitmap.add(colored.copy(Config.ARGB_8888, true));
    position = coloreBitmap.size() - 1;
    System.out.println("Position in drawFunction is: " + position);
} finally {
    decoder_image.recycle();
}
I also cut the bitmap into chunks, because the Samsung Galaxy S2 does not have enough memory to scan the whole bitmap at once.
Hope this helped.
Edit:
I just noticed (my fault) that this was about setting pixels, not only reading them. I am now trying to make it fit your code; I have already changed some of it to match yours and am still working on it.
Edit 2:
Made an adjustment to the code; I hope this works.
Don't forget to change "yourfilepath" at the top of the code.
Just a suggestion to cut the for loop's work in half. You should try it with your images and see if it works.
Idea: on the assumption that the next pixel is the same as the current pixel, we analyze only the current pixel and apply the result to both the current and the next pixel.
Drawback: you have a 50% chance of one pixel being distorted at each color boundary.
Example: turn color 1 into 3
Original: 1 1 1 1 1 2 2 2 2 2 2 1 1 1
After the loop: 3 3 3 3 3 3 2 2 2 2 2 2 3 3 (only 7 iterations are executed, but color 2 is shifted by 1 pixel)
Using the original logic, 14 iterations would be executed.
for (int x = 0; x < mask.getWidth(); x++) {
    for (int y = 0; y < mask.getHeight() - 1; y += 2) { // Change point 1
        g = (mask.getPixel(x, y) & 0x0000FF00) >> 8;
        r = (mask.getPixel(x, y) & 0x00FF0000) >> 16;
        b = (mask.getPixel(x, y) & 0x000000FF);
        if (Math.abs(sR - r) < ANTILAISING_TOLERANCE && Math.abs(sG - g) < ANTILAISING_TOLERANCE && Math.abs(sB - b) < ANTILAISING_TOLERANCE) {
            colored.setPixel(x, y, (colored.getPixel(x, y) & 0xFFFF0000));
            colored.setPixel(x, y + 1, (colored.getPixel(x, y + 1) & 0xFFFF0000)); // Change point 2: apply the same result to the next pixel
        }
    }
}
iDroid,
You've got a very tough situation here. Whenever you do pixel-by-pixel operations, things get a little cumbersome, so a bunch of minor optimizations are key. I'm certain that many people will have a lot to add here, and I'm not certain how much impact these changes will have on your overall process, but I know these general habits save me a LOT when optimizing code.
public void drawLoop() {
    int ANTILAISING_TOLERANCE = 100;
    // EDIT: Moving this to outside the loop is FAR better.
    // Saves you an object call, and the number doesn't change in the loop anyway.
    int maskHeight = mask.getHeight();
    // EDIT: Reverse the loops. Comparisons vs. 0 are faster than any other number,
    // and this saves you a ton of method calls.
    for (int x = mask.getWidth(); --x >= 0; ) {
        for (int y = maskHeight; --y >= 0; ) {
            // EDIT: Saves you 2 method calls for the same result.
            int atPixel = mask.getPixel(x, y);
            g = (atPixel & 0x0000FF00) >> 8;
            r = (atPixel & 0x00FF0000) >> 16;
            b = (atPixel & 0x000000FF);
            if (Math.abs(sR - r) < ANTILAISING_TOLERANCE && Math.abs(sG - g) < ANTILAISING_TOLERANCE && Math.abs(sB - b) < ANTILAISING_TOLERANCE)
                colored.setPixel(x, y, (colored.getPixel(x, y) & 0xFFFF0000));
        }
    }
    imageView.setImageBitmap(colored);
    coloreBitmap.add(colored.copy(Config.ARGB_8888, true));
    position = coloreBitmap.size() - 1;
    System.out.println("Position in drawFunction is: " + position);
}
Aside from that, anything else will create "lossy" behavior but will have far higher yields.
Hope this helps,
FuzzicalLogic