For some reason, I can change a buffered image by using setRGB but not by using the actual int array in the raster:
This works
BufferedImage img = new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < 32; y++) {
for (int x = 0; x < 32; x++) {
int gray = (int) (MathUtil.noise(x, y) * 255); //I have tested the noise function, and know it works fine
img.setRGB(x, y, gray << 16 | gray << 8 | gray);
}
}
This does not
BufferedImage img = new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB);
int[] data = ((DataBufferInt) img.getData().getDataBuffer()).getData();
for (int y = 0; y < 32; y++) {
for (int x = 0; x < 32; x++) {
int gray = (int) (MathUtil.noise(x, y) * 255); //I have tested the noise function, and know it works fine
data[x + y * 32] = gray << 16 | gray << 8 | gray;
}
}
Noise function:
public static float noise(int x, int y) {
int n = x + y * 57;
n = (n << 13) ^ n;
return Math.abs((1.0f - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0f));
}
EDIT
Never mind, I fixed it. I needed to use getRaster() :P
That's because BufferedImage.getData() returns a copy of the image data, not the live raster backing the image, so any changes you make directly to that array will not be reflected in the image.
From the JavaDoc for BufferedImage.getData():
Returns:
a Raster that is a copy of the image data.
Edit: What's interesting is what the Java 6 JavaDoc says for the same method; it's more explicit about the copy's effects. I wonder why they changed it?
Returns the image as one large tile. The Raster returned is a copy of the image data is not updated if the image is changed
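For reference, the fix is just the second snippet with getRaster() in place of getData(). A minimal, self-contained sketch (the noise function is inlined here so it compiles on its own; in the question it lives in MathUtil):
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class RasterWriteDemo {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB);
        // getRaster() (not getData()) exposes the image's own backing buffer
        int[] data = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
        for (int y = 0; y < 32; y++) {
            for (int x = 0; x < 32; x++) {
                int gray = (int) (noise(x, y) * 255);
                data[x + y * 32] = gray << 16 | gray << 8 | gray;
            }
        }
        // Spot-check: the value written to the array is visible through getRGB
        System.out.printf("0x%08X%n", img.getRGB(0, 0));
    }

    // Same noise function as in the question
    public static float noise(int x, int y) {
        int n = x + y * 57;
        n = (n << 13) ^ n;
        return Math.abs((1.0f - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0f));
    }
}
The only change from the non-working snippet is getData() becoming getRaster(); the rest is identical.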
Could the answer be as simple as the changes in the data array not being reflected in the img object?
I want to convert a BufferedImage from RGBA format to CMYK format without using auto conversion tools or libraries, so I tried to extract the RGBA values from individual pixels that I got using BufferedImage.getRGB(). Here is what I've done so far:
BufferedImage img = ImageIO.read(new File("image path"));
int R,G,B,pixel,A;
float Rc,Gc,Bc,K,C,M,Y;
int height = img.getHeight();
int width = img.getWidth();
for(int y = 0 ; y < height ; y++){
for(int x = 0 ; x < width ; x++){
pixel = img.getRGB(x, y);
//I shifted the int bytes to get RGBA values
A = (pixel>>24)&0xff;
R = (pixel>>16)&0xff;
G = (pixel>>8)&0xff;
B = (pixel)&0xff;
Rc = (float) ((float)R/255.0);
Gc = (float) ((float)G/255.0);
Bc = (float) ((float)B/255.0);
// Equations I found on the internet to get CMYK values
K = 1 - Math.max(Bc, Math.max(Rc, Gc));
C = (1- Rc - K)/(1-K);
Y = (1- Bc - K)/(1-K);
M = (1- Gc - K)/(1-K);
}
}
Now that I've extracted these values, I want to draw or construct an image using them. Can you tell me of a method or a way to do this, because I don't think BufferedImage.setRGB() would work? Also, when I printed the values of C, Y, M, some of them had a NaN value; can someone tell me what that means and how to deal with it?
While it is possible, converting RGB to CMYK without a proper color profile will not produce the best results. For better performance and higher color fidelity, I really recommend using an ICC color profile (see ICC_Profile and ICC_ColorSpace classes) and ColorConvertOp. :-)
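Not part of the manual conversion below, but for completeness: if you go the ICC route recommended here, a minimal sketch looks roughly like this. The profile path and class name are placeholders (you need an actual CMYK .icc profile on disk), and ColorConvertOp is left to create a compatible 4-component destination image.
import javax.imageio.ImageIO;
import java.awt.color.ColorSpace;
import java.awt.color.ICC_ColorSpace;
import java.awt.color.ICC_Profile;
import java.awt.image.BufferedImage;
import java.awt.image.ColorConvertOp;
import java.io.File;
import java.io.IOException;

public class IccCmykConvert {
    public static void main(String[] args) throws IOException {
        BufferedImage rgbImage = ImageIO.read(new File(args[0]));

        // Load a CMYK ICC profile from disk (the path is a placeholder; use a real profile)
        ICC_Profile cmykProfile = ICC_Profile.getInstance("path/to/cmyk-profile.icc");
        ColorSpace cmykColorSpace = new ICC_ColorSpace(cmykProfile);

        // ColorConvertOp does the per-pixel conversion according to the color spaces
        ColorConvertOp rgbToCmyk = new ColorConvertOp(
                rgbImage.getColorModel().getColorSpace(), cmykColorSpace, null);

        // Passing null lets the op create a compatible 4-component destination image
        BufferedImage cmykImage = rgbToCmyk.filter(rgbImage, null);
        System.out.println("cmykImage: " + cmykImage);
    }
}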
Anyway, here's how to do it using your own conversion. The important part is creating a CMYK color space, and a ColorModel and BufferedImage using that color space (you could also load a CMYK color space from an ICC profile as mentioned above, but the colors would probably look more off, as it uses different calculations than you do).
public static void main(String[] args) throws IOException {
BufferedImage img = ImageIO.read(new File(args[0]));
int height = img.getHeight();
int width = img.getWidth();
// Create a color model and image in CMYK color space (see custom class below)
ComponentColorModel cmykModel = new ComponentColorModel(CMYKColorSpace.INSTANCE, false, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
BufferedImage cmykImg = new BufferedImage(cmykModel, cmykModel.createCompatibleWritableRaster(width, height), cmykModel.isAlphaPremultiplied(), null);
WritableRaster cmykRaster = cmykImg.getRaster();
int R,G,B,pixel;
float Rc,Gc,Bc,K,C,M,Y;
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
pixel = img.getRGB(x, y);
// Now, as cmykImg already is in CMYK color space, you could actually just invoke
//cmykImg.setRGB(x, y, pixel);
// and the method would perform automatic conversion to the dest color space (CMYK)
// But, here you go... (I just cleaned up your code a little bit):
R = (pixel >> 16) & 0xff;
G = (pixel >> 8) & 0xff;
B = (pixel) & 0xff;
Rc = R / 255f;
Gc = G / 255f;
Bc = B / 255f;
// Equations I found on the internet to get CMYK values
K = 1 - Math.max(Bc, Math.max(Rc, Gc));
if (K == 1f) {
// All black (this is where you would get NaN values I think)
C = M = Y = 0;
}
else {
C = (1- Rc - K)/(1-K);
M = (1- Gc - K)/(1-K);
Y = (1- Bc - K)/(1-K);
}
// ...and store the CMYK values (as bytes in 0..255 range) in the raster
cmykRaster.setDataElements(x, y, new byte[] {(byte) (C * 255), (byte) (M * 255), (byte) (Y * 255), (byte) (K * 255)});
}
}
// You should now have a CMYK buffered image
System.out.println("cmykImg: " + cmykImg);
}
// A simple and not very accurate CMYK color space
// Full source at https://github.com/haraldk/TwelveMonkeys/blob/master/imageio/imageio-core/src/main/java/com/twelvemonkeys/imageio/color/CMYKColorSpace.java
final static class CMYKColorSpace extends ColorSpace {
static final ColorSpace INSTANCE = new CMYKColorSpace();
final ColorSpace sRGB = getInstance(CS_sRGB);
private CMYKColorSpace() {
super(ColorSpace.TYPE_CMYK, 4);
}
public static ColorSpace getInstance() {
return INSTANCE;
}
public float[] toRGB(float[] colorvalue) {
return new float[]{
(1 - colorvalue[0]) * (1 - colorvalue[3]),
(1 - colorvalue[1]) * (1 - colorvalue[3]),
(1 - colorvalue[2]) * (1 - colorvalue[3])
};
}
public float[] fromRGB(float[] rgbvalue) {
// NOTE: This is essentially the same equation you use, except
// this is slightly optimized, and values are already in range [0..1]
// Compute CMY
float c = 1 - rgbvalue[0];
float m = 1 - rgbvalue[1];
float y = 1 - rgbvalue[2];
// Find K
float k = Math.min(c, Math.min(m, y));
// Convert to CMYK values
return new float[]{(c - k), (m - k), (y - k), k};
}
public float[] toCIEXYZ(float[] colorvalue) {
return sRGB.toCIEXYZ(toRGB(colorvalue));
}
public float[] fromCIEXYZ(float[] colorvalue) {
return fromRGB(sRGB.fromCIEXYZ(colorvalue));
}
}
PS: Your question talks about RGBA and CMYK, but your code just ignores the alpha value, so I did the same. If you really wanted to, you could just keep the alpha value as-is and have a CMYK+A image, to allow alpha-compositing in CMYK color space. I'll leave that as an exercise. ;-)
I'm currently having an issue with alpha channels when reading PNG files with ImageIO.read(...)
fileInputStream = new FileInputStream(path);
BufferedImage image = ImageIO.read(fileInputStream);
//Just copying data into an integer array
int width = image.getWidth();
int height = image.getHeight();
int[] pixels = new int[width * height];
image.getRGB(0, 0, width, height, pixels, 0, width);
However, when trying to read values from the pixel array by bit shifting as seen below, the alpha channel is always returning -1
int a = (pixels[i] & 0xff000000) >> 24;
int r = (pixels[i] & 0xff0000) >> 16;
int g = (pixels[i] & 0xff00) >> 8;
int b = (pixels[i] & 0xff);
//a = -1, the other channels are fine
By Googling the problem I understand that the BufferedImage type needs to be defined as below to allow for the alpha channel to work:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
But ImageIO.read(...) returns a BufferedImage without giving the option to specify the image type. So how can I do this?
Any help is much appreciated.
Thanks in advance
I think your "int unpacking" code might be wrong.
I used (pixel >> 24) & 0xff (where pixel is the rgba value of a specific pixel) and it worked fine.
I compared this with the results of java.awt.Color and they worked fine.
I "stole" the "extraction" code directly from java.awt.Color, this is, yet another reason, I tend not to perform these operations this way, it's to easy to screw them up
And my awesome test code...
BufferedImage image = ImageIO.read(new File("BYO image"));
int width = image.getWidth();
int height = image.getHeight();
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int pixel = image.getRGB(x, y);
//value = 0xff000000 | rgba;
int a = (pixel >> 24) & 0xff;
Color color = new Color(pixel, true);
System.out.println(x + "x" + y + " = " + color.getAlpha() + "; " + a);
}
}
NB: Before someone tells me this is inefficient: I wasn't going for efficiency, I was going for quick to write.
You may also want to have a look at How to convert get.rgb(x,y) integer pixel to Color(r,g,b,a) in Java?, which I also used to validate my results
I think the problem is that you're using arithmetic shift (>>) instead of logical shift (>>>). Thus 0xff000000 >> 24 becomes 0xffffffff (i.e. -1)
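A tiny sketch that shows the difference on a fully opaque pixel (the pixel value here is made up for illustration):
public class ShiftDemo {
    public static void main(String[] args) {
        int pixel = 0xFF336699; // alpha = 0xFF, r = 0x33, g = 0x66, b = 0x99

        // Arithmetic shift keeps the sign bit, so the alpha byte comes out as -1
        int aArithmetic = (pixel & 0xff000000) >> 24;

        // Either mask after shifting, or use the unsigned shift >>>
        int aMasked   = (pixel >> 24) & 0xff;
        int aUnsigned = (pixel & 0xff000000) >>> 24;

        System.out.println(aArithmetic); // -1
        System.out.println(aMasked);     // 255
        System.out.println(aUnsigned);   // 255
    }
}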
I have this code to reduce image noise:
for (int x = 0; x < bitmap.getWidth(); x++) {
for (int y = 0; y < bitmap.getHeight(); y++) {
// get one pixel color
int pixel = processedBitmap.getPixel(x, y);
// retrieve color of RGB
int R = Color.red(pixel);
int G = Color.green(pixel);
int B = Color.blue(pixel);
// convert into single value
R = G = B = (int) (0.299 * R + 0.587 * G + 0.114 * B);
// convert to black and white + remove noise
if (R > 162 && G > 162 && B > 162)
bitmap.setPixel(x, y, Color.WHITE);
else if (R < 162 && G < 162 && B < 162)
bitmap.setPixel(x, y, Color.BLACK);
}
}
But it takes a very long time to generate the outcome. Is there any way to optimize this code to make it faster?
Don't use getPixel. Get the image data as an array and use math to access the correct pixel. Write the math such that the fewest multiplications possible are used. Same for setPixel.
Don't use Color.red(), Color.green(), etc. Use masking; it's more efficient than a function call.
Even better, drop into the NDK and do this in C. Image manipulation in Java is generally less than optimal.
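Before going as far as the NDK, here is a rough sketch of what the first two suggestions look like in plain Android Java (same 162 threshold and grayscale weights as the question; the helper name is mine and this is untested):
import android.graphics.Bitmap;
import android.graphics.Color;

// Sketch only: one bulk read, masking on the int[] instead of Color.red()/green()/blue()
// calls, and one bulk write instead of per-pixel getPixel/setPixel.
static void reduceNoiseFast(Bitmap processedBitmap, Bitmap bitmap) {
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    int[] pixels = new int[width * height];
    processedBitmap.getPixels(pixels, 0, width, 0, 0, width, height);

    for (int i = 0; i < pixels.length; i++) {
        int pixel = pixels[i];
        int r = (pixel >> 16) & 0xff; // masking instead of Color.red(pixel)
        int g = (pixel >> 8) & 0xff;  // masking instead of Color.green(pixel)
        int b = pixel & 0xff;         // masking instead of Color.blue(pixel)
        int gray = (int) (0.299 * r + 0.587 * g + 0.114 * b);
        if (gray > 162) {
            pixels[i] = Color.WHITE;
        } else if (gray < 162) {
            pixels[i] = Color.BLACK;
        }
        // pixels with gray exactly 162 are left unchanged, as in the original loop
    }
    bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
}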
There is this image comparison code I am supposed to modify to highlight/point out the differences between two images. Is there a way to modify this code so that it highlights the differences between the images? If not, any suggestion on how to go about it would be greatly appreciated.
int width1 = img1.getWidth(null);
int width2 = img2.getWidth(null);
int height1 = img1.getHeight(null);
int height2 = img2.getHeight(null);
if ((width1 != width2) || (height1 != height2)) {
System.err.println("Error: Images dimensions mismatch");
System.exit(1);
}
long diff = 0;
for (int i = 0; i < height1; i++) {
for (int j = 0; j < width1; j++) {
int rgb1 = img1.getRGB(j, i);
int rgb2 = img2.getRGB(j, i);
int r1 = (rgb1 >> 16) & 0xff;
int g1 = (rgb1 >> 8) & 0xff;
int b1 = (rgb1) & 0xff;
int r2 = (rgb2 >> 16) & 0xff;
int g2 = (rgb2 >> 8) & 0xff;
int b2 = (rgb2) & 0xff;
diff += Math.abs(r1 - r2);
diff += Math.abs(g1 - g2);
diff += Math.abs(b1 - b2);
}
}
double n = width1 * height1 * 3;
double p = diff / n / 255.0;
return (p * 100.0);
This solution did the trick for me. It highlights differences, and has the best performance out of the methods I've tried. (Assumptions: images are the same size. This method hasn't been tested with transparencies.)
Average time to compare a 1600x860 PNG image 50 times (on same machine):
JDK7 ~178 milliseconds
JDK8 ~139 milliseconds
Does anyone have a better/faster solution?
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
// convert images to pixel arrays...
final int w = img1.getWidth(),
h = img1.getHeight(),
highlight = Color.MAGENTA.getRGB();
final int[] p1 = img1.getRGB(0, 0, w, h, null, 0, w);
final int[] p2 = img2.getRGB(0, 0, w, h, null, 0, w);
// compare img1 to img2, pixel by pixel. If different, highlight img1's pixel...
for (int i = 0; i < p1.length; i++) {
if (p1[i] != p2[i]) {
p1[i] = highlight;
}
}
// save img1's pixels to a new BufferedImage, and return it...
// (May require TYPE_INT_ARGB)
final BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
out.setRGB(0, 0, w, h, p1, 0, w);
return out;
}
Usage:
import javax.imageio.ImageIO;
import java.io.File;
ImageIO.write(
getDifferenceImage(
ImageIO.read(new File("a.png")),
ImageIO.read(new File("b.png"))),
"png",
new File("output.png"));
Some inspiration...
What I would do is set each pixel in the output to be the difference between the corresponding pixels in the two images. The difference being calculated in your original code is based on the L1 norm; this is also called the sum of absolute differences. In any case, write a method that takes in your two images and returns an image of the same size, where each location is set to the difference between the pair of pixels that share that location. Basically, this will give you an indication of which pixels are different: the whiter the pixel, the greater the difference between the two corresponding locations.
I'm also going to assume you're using a BufferedImage class, as getRGB() methods are used and you are bit-shifting to access individual channels. In other words, make a method that looks like this:
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
int width1 = img1.getWidth(); // Change - getWidth() and getHeight() for BufferedImage
int width2 = img2.getWidth(); // take no arguments
int height1 = img1.getHeight();
int height2 = img2.getHeight();
if ((width1 != width2) || (height1 != height2)) {
System.err.println("Error: Images dimensions mismatch");
System.exit(1);
}
// NEW - Create output Buffered image of type RGB
BufferedImage outImg = new BufferedImage(width1, height1, BufferedImage.TYPE_INT_RGB);
// Modified - Changed to int as pixels are ints
int diff;
int result; // Stores output pixel
for (int i = 0; i < height1; i++) {
for (int j = 0; j < width1; j++) {
int rgb1 = img1.getRGB(j, i);
int rgb2 = img2.getRGB(j, i);
int r1 = (rgb1 >> 16) & 0xff;
int g1 = (rgb1 >> 8) & 0xff;
int b1 = (rgb1) & 0xff;
int r2 = (rgb2 >> 16) & 0xff;
int g2 = (rgb2 >> 8) & 0xff;
int b2 = (rgb2) & 0xff;
diff = Math.abs(r1 - r2); // Change
diff += Math.abs(g1 - g2);
diff += Math.abs(b1 - b2);
diff /= 3; // Change - Ensure result is between 0 - 255
// Make the difference image gray scale
// The RGB components are all the same
result = (diff << 16) | (diff << 8) | diff;
outImg.setRGB(j, i, result); // Set result
}
}
// Now return
return outImg;
}
To call this method, simply do:
outImg = getDifferenceImage(img1, img2);
This is assuming that you are calling this within a method of your class. Have fun and good luck!
Just to note that the answer from @NickGrealy can be made about 10 times faster if you don't need to keep the first image and can modify it in place.
Example:
// img1 will be updated with the changes from img2
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
byte[] magenta = {-1, 0, -1};
byte[] buff1 = ((DataBufferByte) img1.getRaster().getDataBuffer()).getData();
byte[] buff2 = ((DataBufferByte) img2.getRaster().getDataBuffer()).getData();
for (int i = 1; i < buff1.length; i += 4) {
if (buff1[i] != buff2[i]) {
System.arraycopy(magenta, 0, buff1, i, 3);
}
}
return img1; // the differences have been written into img1's buffer
}
I needed a fast approach to use on a potentially large number of images for visual regression checking.
It runs in < 2 ms on my machine. In my case img1 is already saved on disk, so I don't need to preserve it; I'm only interested in having the differences written into the buffered image so I can write it to a new location for further inspection.
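One caveat: this reads the raw byte[] from the raster, so it assumes both images are byte-based with a 4-byte ABGR layout, which is what ImageIO often (but not always) gives you for PNGs with alpha. If in doubt, a small helper like this assumed sketch (the name is mine) can normalize an image before comparing:
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Assumed helper, not part of the answer above: copies an image into a
// TYPE_4BYTE_ABGR buffer so the DataBufferByte-based comparison applies.
static BufferedImage toAbgr(BufferedImage src) {
    if (src.getType() == BufferedImage.TYPE_4BYTE_ABGR) {
        return src;
    }
    BufferedImage copy = new BufferedImage(src.getWidth(), src.getHeight(),
            BufferedImage.TYPE_4BYTE_ABGR);
    Graphics2D g = copy.createGraphics();
    g.drawImage(src, 0, 0, null);
    g.dispose();
    return copy;
}
Then compare with getDifferenceImage(toAbgr(img1), toAbgr(img2)).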
I am going to set pixels of my Bitmap at some specific points.
For that I am using a for loop, but because it scans the whole image, it takes time.
So what alternative is there that can help me execute it faster?
That for loop is as below:
public void drawLoop(){
int ANTILAISING_TOLERANCE = 100;
for(int x = 0; x < mask.getWidth(); x++){
for(int y = 0; y < mask.getHeight(); y++){
g = (mask.getPixel(x,y) & 0x0000FF00) >> 8;
r = (mask.getPixel(x,y) & 0x00FF0000) >> 16;
b = (mask.getPixel(x,y) & 0x000000FF);
if(Math.abs(sR-r) < ANTILAISING_TOLERANCE && Math.abs(sG-g) < ANTILAISING_TOLERANCE && Math.abs(sB-b) < ANTILAISING_TOLERANCE)
colored.setPixel(x, y, (colored.getPixel(x, y) & 0xFFFF0000));
}
}
imageView.setImageBitmap(colored);
coloreBitmap.add(colored.copy(Config.ARGB_8888, true));
position = coloreBitmap.size()-1;
System.out.println("Position in drawFunction is: "+position);
}
Please help me for that.
Thanks.
I also had this problem.
My program checks every pixel in the bitmap and tests whether the green color (RGB) is higher than the red and blue, on a bitmap of size 3264 x 2448 (the Samsung Galaxy S2 camera size).
It takes 3 seconds to scan and check the whole bitmap, pretty fast if you ask me.
This is my code:
try {
decoder_image = BitmapRegionDecoder.newInstance("yourfilepath",false);
} catch (IOException e) {
e.printStackTrace();
}
example filepath: /mnt/sdcard/DCIM/Camera/image.jpg
try {
final int width = decoder_image.getWidth();
final int height = decoder_image.getHeight();
// Divide the bitmap into 1100x1100 sized chunks and process it.
// This makes sure that the app will not be "overloaded"
int wSteps = (int) Math.ceil(width / 1100.0);
int hSteps = (int) Math.ceil(height / 1100.0);
Rect rect = new Rect();
for (int h = 0; h < hSteps; h++) {
for (int w = 0; w < wSteps; w++) {
int w2 = Math.min(width, (w + 1) * 1100);
int h2 = Math.min(height, (h + 1) * 1100);
rect.set(w * 1100, h * 1100, w2, h2);
mask = decoder_image.decodeRegion(rect,
null);
try {
int bWidth = mask.getWidth();
int bHeight = mask.getHeight();
int[] pixels = new int[bWidth * bHeight];
mask.getPixels(pixels, 0, bWidth, 0, 0,
bWidth, bHeight);
for (int y = 0; y < bHeight; y++) {
for (int x = 0; x < bWidth; x++) {
int index = y * bWidth + x;
int r = (pixels[index] >> 16) & 0xff; //bitwise shifting
int g = (pixels[index] >> 8) & 0xff;
int b = pixels[index] & 0xff;
if (Math.abs(sR-r) < ANTILAISING_TOLERANCE && Math.abs(sG-g) < ANTILAISING_TOLERANCE && Math.abs(sB-b) < ANTILAISING_TOLERANCE) {
// offset by the chunk origin so the write lands in the full-size bitmap
int fullX = rect.left + x;
int fullY = rect.top + y;
colored.setPixel(fullX, fullY, (colored.getPixel(fullX, fullY) & 0xFFFF0000));
}
}
}
} finally {
mask.recycle();
}
}
}
imageView.setImageBitmap(colored);
coloreBitmap.add(colored.copy(Config.ARGB_8888, true));
position = coloreBitmap.size()-1;
System.out.println("Position in drawFunction is: "+position);
} finally {
decoder_image.recycle();
}
I also cut it into chunks, because the Samsung Galaxy S2 does not have enough memory to scan the whole bitmap at once.
Hope this helped.
Edit:
I just noticed (my fault) that this was about setting pixels, not only reading them. I am now trying to make it fit your code; I have already changed some of it to match yours and am working on it at the moment.
Edit 2:
Made an adjustment to the code; I hope this works.
Don't forget to change "yourfilepath" at the top of the code.
Just a suggestion to reduce the for loop by half. You should try with your images and see if it works.
Idea: assuming the next pixel is the same as the current pixel, we only analyse the current pixel and apply the result to both the current and the next pixel.
Drawback: you have a 50% chance of having 1 pixel distorted.
Example: Turn color 1 into 3
Original: 1 1 1 1 1 2 2 2 2 2 2 1 1 1
After the loop: 3 3 3 3 3 3 2 2 2 2 2 2 3 3 (only 7 iterations are executed, but color 2 is shifted by 1 pixel)
Using the original logic, 14 iterations would be executed.
for(int x = 0; x < mask.getWidth(); x++){
for(int y = 0; y < mask.getHeight() - 1; y+=2) { // Change point 1
g = (mask.getPixel(x,y) & 0x0000FF00) >> 8;
r = (mask.getPixel(x,y) & 0x00FF0000) >> 16;
b = (mask.getPixel(x,y) & 0x000000FF);
if(Math.abs(sR-r) < ANTILAISING_TOLERANCE && Math.abs(sG-g) < ANTILAISING_TOLERANCE && Math.abs(sB-b) < ANTILAISING_TOLERANCE) {
colored.setPixel(x, y, (colored.getPixel(x, y) & 0xFFFF0000));
colored.setPixel(x, y+1, (colored.getPixel(x, y) & 0xFFFF0000)); // Change point 2
}
}
}
iDroid,
You've got a very tough situation here. Whenever you do pixel-by-pixel operations, things get a little cumbersome. So a bunch of minor optimizations are key, and I'm certain that many people will have a lot to add here. I'm not certain how much impact they will have on your overall process, but I know that these general habits have saved me a lot of work when optimizing code.
public void drawLoop(){
int ANTILAISING_TOLERANCE = 100;
//EDIT: Moving this to outside the loop is FAR better
// Saves you an object call and the number doesn't change in the loop anyway.
int maskHeight = mask.getHeight();
//EDIT: Reverse the loops. Comparisons vs. 0 are faster than any other number.
// and saves you a ton of method calls.
for(int x = mask.getWidth(); --x >= 0 ; ){
for(int y = maskHeight; --y >= 0 ; ){
//EDIT: Saves you 2 method calls for the same result.
int atPixel = mask.getPixel(x,y);
g = (atPixel & 0x0000FF00) >> 8;
r = (atPixel & 0x00FF0000) >> 16;
b = (atPixel & 0x000000FF);
if(Math.abs(sR-r) < ANTILAISING_TOLERANCE && Math.abs(sG-g) < ANTILAISING_TOLERANCE && Math.abs(sB-b) < ANTILAISING_TOLERANCE)
colored.setPixel(x, y, (colored.getPixel(x, y) & 0xFFFF0000));
}
}
imageView.setImageBitmap(colored);
coloreBitmap.add(colored.copy(Config.ARGB_8888, true));
position = coloreBitmap.size()-1;
System.out.println("Position in drawFunction is: "+position);
}
Aside from that, anything else will create "lossy" behavior but will have far higher yields.
Hope this helps,
FuzzicalLogic
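Building on the getPixels idea from the earlier answer, here is a rough sketch of drawLoop doing one bulk read and one bulk write instead of per-pixel getPixel/setPixel calls. Field names (mask, colored, sR, sG, sB, imageView) are assumed from the question, and this is untested:
import android.graphics.Bitmap;

// Sketch: bulk read, in-memory masking, bulk write. Assumes mask and colored
// have the same dimensions and that sR, sG, sB are fields as in the question.
public void drawLoopFast() {
    final int ANTILAISING_TOLERANCE = 100;
    int width = mask.getWidth();
    int height = mask.getHeight();

    int[] maskPixels = new int[width * height];
    int[] coloredPixels = new int[width * height];
    mask.getPixels(maskPixels, 0, width, 0, 0, width, height);
    colored.getPixels(coloredPixels, 0, width, 0, 0, width, height);

    for (int i = 0; i < maskPixels.length; i++) {
        int p = maskPixels[i];
        int r = (p >> 16) & 0xff;
        int g = (p >> 8) & 0xff;
        int b = p & 0xff;
        if (Math.abs(sR - r) < ANTILAISING_TOLERANCE
                && Math.abs(sG - g) < ANTILAISING_TOLERANCE
                && Math.abs(sB - b) < ANTILAISING_TOLERANCE) {
            coloredPixels[i] &= 0xFFFF0000; // same masking as the original setPixel call
        }
    }
    colored.setPixels(coloredPixels, 0, width, 0, 0, width, height);
    imageView.setImageBitmap(colored);
    // keep the coloreBitmap/position bookkeeping from the original drawLoop if you need it
}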