[Java-OpenCV]: How to convert an image from Cartesian space to polar space? - java

Hi guys, I have to convert this image:
into this:
in Java.
This is my code:
double Cx = original_img.width() / 2;
double Cy = original_img.height() / 2;
int rho, theta;
for (int i = 0; i < img.getHeight(); i++) {
    for (int j = 0; j < img.getWidth(); j++) {
        rho = (int) (Math.sqrt(Math.pow(i - Cx, 2) + Math.pow(j - Cy, 2)));
        theta = (int) (Math.atan2((j - Cy), (i - Cx)));
        int color;
        try {
            color = img.getRGB(rho, theta);
        } catch (Exception e) {
            color = 0;
        }
        int alpha = (color >> 24) & 0xff;
        int red = (color & 0x00ff0000) >> 16;
        int green = (color & 0x0000ff00) >> 8;
        int blue = color & 0x000000ff;
        int pixel = (alpha << 24) | (red << 16) | (green << 8) | blue;
        img2.setRGB(rho, theta, pixel);
        System.out.println("point: " + rho + " " + theta);
    }
}
What's wrong?
I haven't found a simple, good log-polar transform in Java.
My steps are:
1) Take the original image (original_img).
2) Loop over the rows and columns of the image.
3) Calculate rho and theta (these are the new X and Y coordinates for the new pixel, right?).
4) Get the pixel color at coords (rho, theta).
5) Create the new pixel and set it at the new coords.
What is missing or wrong?
Thank you.

Now I get it. You want to apply the transform to pixel coordinates. Sorry.
rho = (int)(Math.sqrt(Math.pow(i-Cx,2) + Math.pow(j-Cy,2)));
theta = (int)(Math.atan2((j-Cy),(i-Cx)));
Why would you want int instead of double in the above code? If not required, I would suggest using double. Also the code is wrong: you are subtracting half the dimension each time. Do this instead:
rho = Math.sqrt(Math.pow(i,2) + Math.pow(j,2));
theta = Math.atan2((j),(i));
That looks fine to me. But why do you want to convert to polar anyway?
P.S. The above code has nothing to do with OpenCV, of course.
Edit: If I am interpreting the algorithm correctly, the Cartesian origin should be at the center of the image, so use your code:
I cannot tell you about the rotation part, though. But from your statement "get color pixel at coords (rho,theta)" I am guessing that you don't have to rotate the image; the effect does not require it.
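For reference, here is a minimal sketch of the usual fix. The posted code computes (rho, theta) from the source pixel and then uses them as both read and write coordinates, and casting theta (radians) to int collapses it to a handful of values. Instead, iterate over the destination polar grid and map each (rho, theta) back to a Cartesian source pixel (inverse mapping), scaling both axes to the image size. This is a sketch under those assumptions, not a drop-in replacement and not OpenCV code:
import java.awt.image.BufferedImage;

// Polar "unwrap" by inverse mapping: columns of the output span theta
// in [0, 2*pi), rows span rho in [0, maxRho). Out-of-range samples are
// simply left transparent.
static BufferedImage toPolar(BufferedImage src) {
    int w = src.getWidth(), h = src.getHeight();
    double cx = w / 2.0, cy = h / 2.0;
    double maxRho = Math.sqrt(cx * cx + cy * cy);
    BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < h; y++) {       // y indexes rho
        for (int x = 0; x < w; x++) {   // x indexes theta
            double theta = 2.0 * Math.PI * x / w;
            double rho = maxRho * y / h;
            int srcX = (int) Math.round(cx + rho * Math.cos(theta));
            int srcY = (int) Math.round(cy + rho * Math.sin(theta));
            if (srcX >= 0 && srcX < w && srcY >= 0 && srcY < h) {
                dst.setRGB(x, y, src.getRGB(srcX, srcY));
            }
        }
    }
    return dst;
}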

Related

Handling borders when applying a filter to an image

I'm trying to tidy up some of my GitHub projects for a portfolio and was hoping for some help.
I have a basic image convolution kernel program in Java that applies a filter to an input image.
The kernels are 3x3 and 5x5 2D arrays and are applied across each pixel in the image. Obviously this causes issues around the edges, where 3-5 cells of the kernel are multiplied against empty values (causing the outer pixels in the image to be black or white).
Below are some examples of what I mean (the white pixel bar, not the grey part).
The following is the crux of the code involving the image processing.
for (int xCoord = 0; xCoord < (width); xCoord++) { // -2
    for (int yCoord = 0; yCoord < (height); yCoord++) {
        // Output Red, Green and Blue Values
        int outR, outG, outB, outA;
        // Temp red, green, blue values that will contain the total r/g/b values after
        // kernel multiplication
        double red = 0, green = 0, blue = 0, alpha = 0;
        int outRGB = 0;
        /*
         * Loop over the Kernel (For each cell in kernel)
         * The offset is added to the xCoord and yCoord to follow the footprint of the
         * kernel. The logic below is that all kernels have odd dimensions (so they can
         * have a centre pixel, 3x3, 5x5 etc); the offset needs to run from negative half
         * their length (rounded down) to positive half their length (rounded down).
         * This allows us to input kernels of various sizes (5x5, 9x9 etc) and the
         * calculation should not be thrown off.
         */
        try {
            for (int xOffset = Math.negateExact(k.getKernels().length / 2);
                    xOffset <= k.getKernels().length / 2; xOffset++) {
                for (int yOffset = Math.negateExact(k.getKernels().length / 2);
                        yOffset <= k.getKernels().length / 2; yOffset++) {
The first two loops iterate over each pixel. The second two loops iterate over the kernel (a 2D array enum class); the offset is used to walk the 3x3 or 5x5 kernel footprint at each pixel in the image.
What follows is my very basic attempt to wrap the sampled pixels so that edge pixels use values from the opposite side of the image, but my main question is how best to handle these edge cases. If anyone has any pointers on where to start solving this, it would be much appreciated.
                    // TODO: very basic wrapping logic
                    int realX = (xCoord - k.getKernels().length / 2 + xOffset + width) % width;
                    int realY = (yCoord - k.getKernels().length / 2 + yOffset + height) % height;
                    int RGB = image.getRGB(realX, realY); // The RGB value for the pixel,
                                                          // split out below
                    int A = (RGB >> 24) & 0xFF; // Bit shift 24 to get alpha value
                    int R = (RGB >> 16) & 0xFF; // Bit shift 16 to get red value
                    int G = (RGB >> 8) & 0xFF;  // Bit shift 8 to get green value
                    int B = (RGB) & 0xFF;
                    // actual rgb * kernel logic
                    red += (R * (k.getKernels()[yOffset + k.getKernels().length / 2])[xOffset
                            + k.getKernels().length / 2] * multiFactor);
                    green += (G * k.getKernels()[yOffset + k.getKernels().length / 2][xOffset
                            + k.getKernels().length / 2] * multiFactor);
                    blue += (B * k.getKernels()[yOffset + k.getKernels().length / 2][xOffset
                            + k.getKernels().length / 2] * multiFactor);
                    alpha += (A * k.getKernels()[yOffset + k.getKernels().length / 2][xOffset
                            + k.getKernels().length / 2] * multiFactor);
                }
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Error");
        }
        // Logic here prevents the pixel going over 255 or under 0:
        // the "winner" of the max between the colour value and 0 (the min pixel value)
        // is then taken through Math.min with 255 (the max value for a pixel)
        outR = (int) Math.min(Math.max((red + bias), 0), 255);
        outG = (int) Math.min(Math.max((green + bias), 0), 255);
        outB = (int) Math.min(Math.max((blue + bias), 0), 255);
        outA = (int) Math.min(Math.max((alpha + bias), 0), 255);
        // Reassembling the separate color channels into one variable again
        outRGB = outRGB | (outA << 24);
        outRGB = outRGB | (outR << 16);
        outRGB = outRGB | (outG << 8);
        outRGB = outRGB | outB;
        // Setting with the reassembled RGB value
        // output.setRGB(xCoord, yCoord, outRGB);
I'm a bit stumped on how to proceed, but if anyone can suggest efficient ways to handle these edge cases, point me in the right direction, or even offer some constructive criticism on how to improve any of the code, it would be much appreciated. If anyone's interested the full project is here
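One common approach (not specific to this project) is to replace the single wrapping trick with a coordinate-remapping helper and pick a border strategy per filter: clamping (extend the edge pixel), wrapping (tile the image) or mirroring (reflect at the border). Here is a sketch assuming the same width/height variables as the code above; note that since xOffset already runs from -len/2 to +len/2, the sample coordinate is simply xCoord + xOffset:
// Clamp: extend the edge pixel outward.
static int clamp(int coord, int max) {
    return Math.min(Math.max(coord, 0), max - 1);
}

// Wrap: tile the image; the double-modulo handles negative coords.
static int wrap(int coord, int max) {
    return ((coord % max) + max) % max;
}

// Mirror: reflect at the border (fine for small kernel overhangs).
static int mirror(int coord, int max) {
    if (coord < 0) coord = -coord - 1;
    if (coord >= max) coord = 2 * max - coord - 1;
    return coord;
}

// Usage inside the kernel loops, e.g. with clamping:
// int realX = clamp(xCoord + xOffset, width);
// int realY = clamp(yCoord + yOffset, height);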

Alpha channel ignored when using ImageIO.read()

I'm currently having an issue with alpha channels when reading PNG files with ImageIO.read(...)
fileInputStream = new FileInputStream(path);
BufferedImage image = ImageIO.read(fileInputStream);
//Just copying data into an integer array
int[] pixels = new int[image.getWidth() * image.getHeight()];
image.getRGB(0, 0, width, height, pixels, 0, width);
However, when trying to read values from the pixel array by bit shifting as seen below, the alpha channel is always returning -1
int a = (pixels[i] & 0xff000000) >> 24;
int r = (pixels[i] & 0xff0000) >> 16;
int g = (pixels[i] & 0xff00) >> 8;
int b = (pixels[i] & 0xff);
//a = -1, the other channels are fine
By Googling the problem I understand that the BufferedImage type needs to be defined as below to allow for the alpha channel to work:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
But ImageIO.read(...) returns a BufferedImage without giving the option to specify the image type. So how can I do this?
Any help is much appreciated.
Thanks in advance
I think your "int unpacking" code might be wrong.
I used (pixel >> 24) & 0xff (where pixel is the rgba value of a specific pixel) and it worked fine.
I compared this with the results of java.awt.Color and they agreed.
I "stole" the "extraction" code directly from java.awt.Color. This is yet another reason I tend not to perform these operations by hand; it's too easy to screw them up.
And my awesome test code...
BufferedImage image = ImageIO.read(new File("BYO image"));
int width = image.getWidth();
int height = image.getHeight();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int pixel = image.getRGB(x, y);
        //value = 0xff000000 | rgba;
        int a = (pixel >> 24) & 0xff;
        Color color = new Color(pixel, true);
        System.out.println(x + "x" + y + " = " + color.getAlpha() + "; " + a);
    }
}
NB: Before someone points out that this is inefficient: I wasn't going for efficiency, I was going for quick to write.
You may also want to have a look at How to convert get.rgb(x,y) integer pixel to Color(r,g,b,a) in Java?, which I also used to validate my results.
I think the problem is that you're using an arithmetic shift (>>) instead of a logical shift (>>>). Thus 0xff000000 >> 24 becomes 0xffffffff (i.e. -1).
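In other words, either shift first and then mask, or mask first and use the unsigned shift; both of these yield 0..255 for the alpha channel:
int a1 = (pixels[i] >> 24) & 0xff;        // shift first, then mask off the sign extension
int a2 = (pixels[i] & 0xff000000) >>> 24; // mask first, then shift in zero bits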

Image analysis function to calculate middle gray level (max(z)+min(z))/2 in Java

How do I calculate the middle gray level, (max(z)+min(z))/2, over the points where the structuring element is 1, and set the output pixel to that value?
I just know a little about how to get the RGB value of each pixel by using image.getRGB(x,y). I have no idea how to get the gray level value of each pixel of the image, what z is in the formula, and so on.
Please help me with this. Thanks in advance.
I'm going to assume that z refers to the pixels within your structuring element. I'm also going to assume that "structuring element" is meant in the morphological sense. Here are a few pointers before we start:
You can convert a colour pixel to its graylevel intensity by using the luminance formula. Consulting the SMPTE Rec. 709 standard, the output graylevel intensity, given the RGB components, is: Y = 0.2126*R + 0.7152*G + 0.0722*B.
We're going to assume that the structuring element is odd-sized. This allows for symmetric analysis of the structuring element at each pixel in your image where it is placed.
I'm going to assume that your image is already loaded in as a BufferedImage.
Your structuring element will be a 2D array of int.
I'm not going to process those pixels where the structuring element traverses out of bounds, to make things easy.
As such, the basic algorithm is this:
For each pixel in our image, place the centre of the structuring element at this location.
For each pixel location where the structuring element is 1 that coincides with this position, find the maximum and minimum graylevel intensity.
Set the output image pixel at this location to (max(z) + min(z)) / 2.
Without further ado:
public BufferedImage calculateMiddleGray(BufferedImage img, int[][] mask) {
    // Declare output image
    BufferedImage outImg = new BufferedImage(img.getWidth(),
            img.getHeight(), BufferedImage.TYPE_INT_RGB);
    // For each pixel in our image...
    for (int i = mask.length / 2; i < img.getWidth() - mask.length / 2; i++) {
        for (int j = mask[0].length / 2; j < img.getHeight() - mask[0].length / 2; j++) {
            int maxPix = -1;
            int minPix = 256;
            // For each pixel in the mask...
            for (int x = -mask.length / 2; x <= mask.length / 2; x++) {
                for (int y = -mask[0].length / 2; y <= mask[0].length / 2; y++) {
                    // Obtain structuring element pixel
                    int structPix = mask[y + mask.length / 2][x + mask[0].length / 2];
                    // If not 1, continue
                    if (structPix != 1)
                        continue;
                    // Get RGB pixel
                    int rgb = img.getRGB(i + x, j + y);
                    // Get red, green and blue channels individually
                    int redPixel = (rgb >> 16) & 0xFF;
                    int greenPixel = (rgb >> 8) & 0xFF;
                    int bluePixel = rgb & 0xFF;
                    // Convert to grayscale using integer logic
                    // (these integer weights approximate the Rec. 601
                    // coefficients ~0.299/0.587/0.114)
                    int lum = (77 * redPixel + 150 * greenPixel + 29 * bluePixel) >> 8;
                    // Find max and min appropriately
                    if (lum > maxPix)
                        maxPix = lum;
                    if (lum < minPix)
                        minPix = lum;
                }
            }
            // Set output pixel
            // A grayscale image has all of its RGB components equal
            int outPixel = (maxPix + minPix) / 2;
            // Cap output - ensure we don't go out of bounds
            if (outPixel > 255)
                outPixel = 255;
            if (outPixel < 0)
                outPixel = 0;
            int finalOut = (outPixel << 16) | (outPixel << 8) | outPixel;
            outImg.setRGB(i, j, finalOut);
        }
    }
    return outImg;
}
To call this method, create an image img using any standard method, then create a structuring element mask that is a 2D integer array. After that, place this method in your class and invoke it with:
BufferedImage outImg = calculateMiddleGray(img, mask);
Also (of course), make sure you import the necessary package for the BufferedImage class:
import java.awt.image.BufferedImage;
Note: This is untested code. Hope it works!
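For example, a hypothetical 3x3 cross-shaped structuring element (chosen purely for illustration) could be defined and used like this:
// 3x3 cross: 1 means the pixel participates in the min/max search
int[][] mask = {
    {0, 1, 0},
    {1, 1, 1},
    {0, 1, 0}
};
BufferedImage outImg = calculateMiddleGray(img, mask);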

Merge four CMYK images into one RGB Image Java

Thanks in advance for any help you can provide, and sorry for my bad English.
I know there are a lot of questions about this topic, but I have looked all over the Internet (and Stack Overflow too) and I haven't found any answer for this...
I have four images; each one of them is in the TYPE_BYTE_GRAY color model. I have loaded these four images using this code:
int numElems = 4;
BufferedImage[] img = new BufferedImage[numElems];
for (int i = 0; i < numElems; i++) {
    FileInputStream in = new FileInputStream(args[i]);
    img[i] = ImageIO.read(in);
    in.close();
}
Just ImageIO read... I need to "merge" the four images into one RGB image... Each one of the images is one channel from a CMYK image. All these images have equal dimensions. I have converted the four images to one CMYK image using this code:
for (int i = 0; i < img[0].getWidth(); i++) {
    for (int j = 0; j < img[0].getHeight(); j++) {
        // Read current point color...
        for (int k = 0; k < numElems; k++) {
            colPunto[k] = (img[k].getRGB(i, j) & 0xFF);
        }
        int colorPunto = convertComponentsRGB(colPunto);
        // Now, I set the point...
        out.setRGB(i, j, colorPunto);
    }
}
This function "convertComponentsRGB" is just natural math to convert CMYK color to RGB Color...
function convertComponentsRGB(int[] pointColor){
float cyan = (float)pointColor[0] / (float)255;
float magenta = (float)pointColor[1] / (float)255;
float yellow = (float)pointColor[2] / (float)255;
float black = (float)pointColor[3] / (float)255;
float c = min(1f,cyan * (1f - black) + black); //minimum value
float m = min(1f,magenta * (1f - black) + black); //minimum value
float y = min(1f, yellow * (1f - black) + black); //minimum value
result[0] = Math.round(255f*(1f - c));
result[1] = Math.round(255f*(1f - m));
result[2] = Math.round(255f*(1f - y));
return (result[0]<<16) | (result[1]<<8) | result[2];
}
The problem here is... speed. It takes 12 seconds to process one image, because we have to read and write every pixel, and I think the getRGB and setRGB methods aren't very fast (or there's simply a better way to achieve this).
How can I achieve this? I have read a lot about ColorModel and filters, but I still don't understand how to do this in better time.
You can use getData and setData to speed up access to the pixels vs. getRGB and setRGB.
There's no need to convert the CMYK to floating point and back, you can work directly with the pixel values:
int convertComponentsRGB(int[] pointColor) {
    int r = Math.max(0, 255 - (pointColor[0] + pointColor[3]));
    int g = Math.max(0, 255 - (pointColor[1] + pointColor[3]));
    int b = Math.max(0, 255 - (pointColor[2] + pointColor[3]));
    return (r << 16) | (g << 8) | b;
}

How to set the particular value to any pixel in java?

Actually I'm working on an image processing project and got stuck somewhere. I have to convert a colored image to grayscale, and for this I have extracted the values of the RED, GREEN and BLUE components of a pixel using getRGB(). Now I want to set the RGB value of that pixel equal to the average of its RGB components. The RGB components are stored in int variables, so can you help me set the average of these RGB components as the original pixel value?
The part of the code is :
rgbArray = new int[w * h];
buffer.getRGB(0, 0, width, height, rgbArray, 0, width);
int a, r, g, b;
for (int i = 0; i < w * h; i++) {
    r = (0x00ff0000 & rgbArray[i]) >> 16;
    g = (0x0000ff00 & rgbArray[i]) >> 8;
    b = (0x000000ff & rgbArray[i]);
    rgbArray[i] = (r + g + b) / 3;
}
buffer.setRGB(0, 0, width, height, rgbArray, 0, width);
But this is not giving me a gray image. Can you tell me where I am making a mistake?
It is not clear what you want to do. If you are trying to produce a gray color, I suggest referring to the following page, which shows RGB codes for different shades of gray: http://www.tayloredmktg.com/rgb/
If you are trying to get a translucent image, you have to use the alpha channel (RGBA) in Java. You can also get translucency by compositing the underlying image with your current image in special ways, but that is MUCH harder than using the alpha channel.
Your code does not pack the grayscale level back into each color component. Also, as I said in my comment on the question, conversion to grayscale needs to take into account the human eye's different sensitivity to each color component. A typical formula for obtaining the gray level is
Y = 0.30 * R + 0.59 * G + 0.11 * B
as this Wikipedia article states.
So your for loop should look like this:
for (int i = 0; i < w * h; i++) {
    a = (0xff000000 & rgbArray[i]);
    r = (0x00ff0000 & rgbArray[i]) >> 16;
    g = (0x0000ff00 & rgbArray[i]) >> 8;
    b = (0x000000ff & rgbArray[i]);
    int gray = (int) (0.30 * r + 0.59 * g + 0.11 * b);
    if (gray < 0) gray = 0;
    if (gray > 255) gray = 255;
    rgbArray[i] = a | (gray << 16) | (gray << 8) | gray;
}
You can, of course, declare gray outside of the loop, like you did with r, etc.
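As an aside, if any reasonable gray is acceptable, Java 2D can also do the conversion for you with ColorConvertOp; a minimal sketch, assuming buffer is your BufferedImage:
import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ColorConvertOp;

// Convert to grayscale in one call; Java 2D handles the channel weighting.
ColorConvertOp toGray = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
BufferedImage gray = toGray.filter(buffer, null);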
