Merge four CMYK images into one RGB image in Java

Thanks in advance for any help you can provide, and sorry for my bad English.
I know there are a lot of questions about this topic, but I have searched all over the Internet (including Stack Overflow) and haven't found an answer for this...
I have four images; each one is in the TYPE_BYTE_GRAY color model. I loaded these four images using this code:
int numElems = 4;
BufferedImage[] img = new BufferedImage[numElems];
for (int i = 0; i < numElems; i++) {
    FileInputStream in = new FileInputStream(args[i]);
    img[i] = ImageIO.read(in);
    in.close();
}
Just ImageIO read... I need to "merge" the four images into one RGB image... Each of the images is one channel of a CMYK image, and all of them have equal dimensions. I combined the four channels into one RGB image using this code:
for (int i = 0; i < img[0].getWidth(); i++) {
    for (int j = 0; j < img[0].getHeight(); j++) {
        // Read the current point's color from each channel...
        for (int k = 0; k < numElems; k++) {
            colPunto[k] = img[k].getRGB(i, j) & 0xFF;
        }
        int colorPunto = convertComponentsRGB(colPunto);
        // Now, I set the point...
        out.setRGB(i, j, colorPunto);
    }
}
This function "convertComponentsRGB" is just the standard math to convert a CMYK color to an RGB color:
int convertComponentsRGB(int[] pointColor) {
    int[] result = new int[3];
    float cyan = (float) pointColor[0] / 255f;
    float magenta = (float) pointColor[1] / 255f;
    float yellow = (float) pointColor[2] / 255f;
    float black = (float) pointColor[3] / 255f;
    float c = Math.min(1f, cyan * (1f - black) + black);
    float m = Math.min(1f, magenta * (1f - black) + black);
    float y = Math.min(1f, yellow * (1f - black) + black);
    result[0] = Math.round(255f * (1f - c));
    result[1] = Math.round(255f * (1f - m));
    result[2] = Math.round(255f * (1f - y));
    return (result[0] << 16) | (result[1] << 8) | result[2];
}
The problem here is... speed. It takes 12 seconds to process one image, because we have to read and write every pixel, and I think the getRGB and setRGB methods aren't very fast (or there is simply a better way to achieve this).
How can I achieve this? I have been reading a lot about ColorModel and filters, but I still don't understand how to do this in less time.

You can use getData and setData to speed up access to the pixels vs. getRGB and setRGB.
There's no need to convert the CMYK to floating point and back, you can work directly with the pixel values:
int convertComponentsRGB(int[] pointColor) {
    int r = Math.max(0, 255 - (pointColor[0] + pointColor[3]));
    int g = Math.max(0, 255 - (pointColor[1] + pointColor[3]));
    int b = Math.max(0, 255 - (pointColor[2] + pointColor[3]));
    return (r << 16) | (g << 8) | b;
}
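To illustrate the raster idea above, here is a hedged sketch (the class name CmykMerge and the assumption that all four inputs are TYPE_BYTE_GRAY images backed by a DataBufferByte are mine): it reads each channel's backing byte array directly and writes the result with a single bulk setRGB call, instead of one getRGB/setRGB call per pixel.

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class CmykMerge {
    // Merge four TYPE_BYTE_GRAY channel images (C, M, Y, K) into one RGB image,
    // using the backing byte arrays instead of per-pixel getRGB calls.
    public static BufferedImage merge(BufferedImage[] img) {
        int w = img[0].getWidth();
        int h = img[0].getHeight();
        byte[] c = ((DataBufferByte) img[0].getRaster().getDataBuffer()).getData();
        byte[] m = ((DataBufferByte) img[1].getRaster().getDataBuffer()).getData();
        byte[] y = ((DataBufferByte) img[2].getRaster().getDataBuffer()).getData();
        byte[] k = ((DataBufferByte) img[3].getRaster().getDataBuffer()).getData();
        int[] out = new int[w * h];
        for (int i = 0; i < out.length; i++) {
            int kk = k[i] & 0xFF;
            // Same integer conversion as the answer's simplified formula.
            int r = Math.max(0, 255 - ((c[i] & 0xFF) + kk));
            int g = Math.max(0, 255 - ((m[i] & 0xFF) + kk));
            int b = Math.max(0, 255 - ((y[i] & 0xFF) + kk));
            out[i] = (r << 16) | (g << 8) | b;
        }
        BufferedImage result = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        result.setRGB(0, 0, w, h, out, 0, w); // one bulk write instead of w*h calls
        return result;
    }
}
```

On a typical image this replaces millions of method calls with plain array indexing, which is where most of the 12 seconds were going.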

Related

Image analysis function to calculate middle gray level (max(z)+min(z))/2 in Java

How do I calculate the middle gray level, (max(z)+min(z))/2, over the points where the structuring element is 1, and set the output pixel to that value?
I only know a little about how to get the RGB value of each pixel using image.getRGB(x,y). I have no idea how to get the gray level value of each pixel, what z is in the formula, and so on.
Please help me with this. Thanks in advance.
I'm going to assume that z are the pixels within your structuring element. I'm also going to assume that "structuring element" is in the case of morphology. Here are a few pointers before we start:
You can convert a colour pixel to its graylevel intensity by using the Luminance formula. By consulting the SMPTE Rec. 709 standard, the output graylevel intensity, given the RGB components is: Y = 0.2126*R + 0.7152*G + 0.0722*B.
We're going to assume that the structuring element is odd. This will allow for the symmetric analysis of the structuring element for each pixel in your image where it is placed
I'm going to assume that your image is already loaded in as a BufferedImage.
Your structuring element will be a 2D array of int.
I'm not going to process those pixels where the structuring element traverses out of bounds to make things easy.
As such, the basic algorithm is this:
For each pixel in our image, place the centre of the structuring element at this location
For each pixel location where the structuring element is 1 and coincides with this position, find the maximum and minimum graylevel intensities
Set the output image pixel at this location to (max(z) + min(z)) / 2
Without further ado:
public BufferedImage calculateMiddleGray(BufferedImage img, int[][] mask)
{
    // Declare output image
    BufferedImage outImg = new BufferedImage(img.getWidth(),
        img.getHeight(), BufferedImage.TYPE_INT_RGB);

    // For each pixel in our image...
    for (int i = mask.length/2; i < img.getWidth() - mask.length/2; i++) {
        for (int j = mask[0].length/2; j < img.getHeight() - mask[0].length/2; j++) {
            int maxPix = -1;
            int minPix = 256;

            // For each pixel in the mask...
            for (int x = -mask.length/2; x <= mask.length/2; x++) {
                for (int y = -mask[0].length/2; y <= mask[0].length/2; y++) {

                    // Obtain structuring element pixel
                    int structPix = mask[y+mask.length/2][x+mask[0].length/2];

                    // If not 1, continue
                    if (structPix != 1)
                        continue;

                    // Get RGB pixel
                    int rgb = img.getRGB(i+x, j+y);

                    // Get red, green and blue channels individually
                    int redPixel = (rgb >> 16) & 0xFF;
                    int greenPixel = (rgb >> 8) & 0xFF;
                    int bluePixel = rgb & 0xFF;

                    // Convert to grayscale
                    // SMPTE Rec. 709 luma using integer logic
                    // (coefficients 0.2126, 0.7152, 0.0722 scaled by 256)
                    int lum = (54*redPixel + 183*greenPixel + 19*bluePixel) >> 8;

                    // Find max and min appropriately
                    if (lum > maxPix)
                        maxPix = lum;
                    if (lum < minPix)
                        minPix = lum;
                }
            }

            // Set output pixel
            // Grayscale image has all of its RGB pixels equal
            int outPixel = (maxPix + minPix) / 2;

            // Cap output - ensure we don't go out of bounds
            if (outPixel > 255)
                outPixel = 255;
            if (outPixel < 0)
                outPixel = 0;

            int finalOut = (outPixel << 16) | (outPixel << 8) | outPixel;
            outImg.setRGB(i, j, finalOut);
        }
    }
    return outImg;
}
To call this method, create an image img using any standard method, then create a structuring element mask as a 2D integer array. After that, place this method in your class and invoke it with:
BufferedImage outImg = calculateMiddleGray(img, mask);
Also (and of course), make sure you import the necessary package for the BufferedImage class:
import java.awt.image.BufferedImage;
Note: This is untested code. Hope it works!
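A note on the >>8 integer luma trick: multiplying by coefficients that sum to 256 and shifting right by 8 approximates the floating-point formula without any float math. A standalone sketch (the class name and the particular coefficients 54/183/19, i.e. the Rec. 709 weights scaled by 256, are my choice):

```java
public class Luma {
    // Floating-point SMPTE Rec. 709 luma.
    public static int lumaFloat(int r, int g, int b) {
        return Math.round(0.2126f * r + 0.7152f * g + 0.0722f * b);
    }

    // Integer approximation: 54 + 183 + 19 == 256, so the shift by 8
    // divides the weighted sum back down without losing the scale.
    public static int lumaInt(int r, int g, int b) {
        return (54 * r + 183 * g + 19 * b) >> 8;
    }
}
```

The two versions agree to within a gray level or so, which is plenty for a max/min morphological filter.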

[Java-OpenCV]: How to convert an image from Cartesian space to polar space?

Hi guys, I have to convert this image:
[source image]
into this:
[polar result image]
in Java.
This is my code:
double Cx = original_img.width()/2;
double Cy = original_img.height()/2;
int rho, theta;
for (int i = 0; i < img.getHeight(); i++) {
    for (int j = 0; j < img.getWidth(); j++) {
        rho = (int)(Math.sqrt(Math.pow(i-Cx,2) + Math.pow(j-Cy,2)));
        theta = (int)(Math.atan2((j-Cy),(i-Cx)));
        int color;
        try {
            color = img.getRGB((int)rho, (int)theta);
        } catch (Exception e) {
            color = 0;
        }
        int alpha = (color >> 24) & 0xff;
        int red = (color & 0x00ff0000) >> 16;
        int green = (color & 0x0000ff00) >> 8;
        int blue = color & 0x000000ff;
        int pixel = (alpha << 24) | (red << 16) | (green << 8) | blue;
        img2.setRGB(rho, theta, pixel);
        System.out.println("point: " + rho + " " + theta);
    }
}
What's wrong?
I haven't found a simple, good log-polar transform in Java.
My steps are:
1) take an original image (original_img)
2) cycle over the rows and cols of the image
3) calculate rho and theta (these are the new X and Y coordinates for the new pixel, right?)
4) get the color of the pixel at coords (rho, theta)
5) create a new pixel and set it at the new coords.
What is missing or wrong?
Thank you.
Now I get it. You want to apply the transform to pixel coordinates. Sorry.
rho = (int)(Math.sqrt(Math.pow(i-Cx,2) + Math.pow(j-Cy,2)));
theta = (int)(Math.atan2((j-Cy),(i-Cx)));
Why would you want int instead of double in the above code? If not required, I would suggest using double. Also, if the origin is meant to be the image corner rather than the centre, the code is wrong, because you are subtracting half the dimensions each time; in that case, do this instead:
rho = Math.sqrt(Math.pow(i,2) + Math.pow(j,2));
theta = Math.atan2((j),(i));
That looks fine to me. But why do you want to convert to polar anyway?
P.S. The above code has nothing to do with OpenCV, of course.
Edit: If I am interpreting the algorithm correctly, the Cartesian origin should be at the centre of the image, so use your original code:
I cannot tell you about the rotation part, but from your statement "get color pixel at coords (rho,theta)" I am guessing that you don't have to rotate the image. The effect does not require this.
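If it helps, here is a hedged sketch of the usual inverse-mapping approach (the class and method names are mine, and the output size of 360 theta columns by min(cx, cy) rho rows is an arbitrary choice): instead of pushing each source pixel to a computed (rho, theta), walk the destination pixels and pull from the source, so every output pixel gets a value and no out-of-bounds (rho, theta) writes occur.

```java
import java.awt.image.BufferedImage;

public class PolarTransform {
    // Cartesian-to-polar unwrap by inverse mapping: for each destination
    // pixel (theta column t, rho row r), compute the source (x, y) it
    // comes from and copy that color.
    public static BufferedImage toPolar(BufferedImage src) {
        double cx = src.getWidth() / 2.0;
        double cy = src.getHeight() / 2.0;
        int maxRho = (int) Math.min(cx, cy);   // largest circle fully inside
        int thetaSteps = 360;                  // angular resolution (assumed)
        BufferedImage dst = new BufferedImage(thetaSteps, maxRho,
                BufferedImage.TYPE_INT_RGB);
        for (int t = 0; t < thetaSteps; t++) {
            double angle = 2 * Math.PI * t / thetaSteps;
            for (int r = 0; r < maxRho; r++) {
                int x = (int) Math.round(cx + r * Math.cos(angle));
                int y = (int) Math.round(cy + r * Math.sin(angle));
                if (x >= 0 && x < src.getWidth() && y >= 0 && y < src.getHeight()) {
                    dst.setRGB(t, r, src.getRGB(x, y));
                }
            }
        }
        return dst;
    }
}
```

Nearest-neighbour sampling is used here for brevity; bilinear interpolation of the four surrounding source pixels would give smoother results.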

Cropping image lowers quality and border looks bad

Using some math, I created the following Java function that takes a Bitmap and crops out a centered square, from which a circle with a black border around it is cropped in turn.
The rest of the square should be transparent.
Additionally, there is a transparent margin on the sides so the preview is not damaged when sending the image via messengers.
The code of my function is as follows:
public static Bitmap edit_image(Bitmap src,boolean makeborder) {
int width = src.getWidth();
int height = src.getHeight();
int A, R, G, B;
int pixel;
int middlex = width/2;
int middley = height/2;
int seitenlaenge,startx,starty;
if(width>height)
{
seitenlaenge=height;
starty=0;
startx = middlex - (seitenlaenge/2);
}
else
{
seitenlaenge=width;
startx=0;
starty = middley - (seitenlaenge/2);
}
int kreisradius = seitenlaenge/2;
int mittx = startx + kreisradius;
int mitty = starty + kreisradius;
int border=2;
int seitenabstand=55;
Bitmap bmOut = Bitmap.createBitmap(seitenlaenge+seitenabstand, seitenlaenge+seitenabstand, Bitmap.Config.ARGB_8888);
bmOut.setHasAlpha(true);
for(int x = 0; x < width; ++x) {
for(int y = 0; y < height; ++y) {
int distzumitte = (int) (Math.pow(mittx-x,2) + Math.pow(mitty-y,2)); // (Xm-Xp)^2 + (Ym-Yp)^2 = dist^2
distzumitte = (int) Math.sqrt(distzumitte);
pixel = src.getPixel(x, y);
A = Color.alpha(pixel);
R = (int)Color.red(pixel);
G = (int)Color.green(pixel);
B = (int)Color.blue(pixel);
int color = Color.argb(A, R, G, B);
int afterx=x-startx+(seitenabstand/2);
int aftery=y-starty+(seitenabstand/2);
if(x < startx || y < starty || afterx>=seitenlaenge+seitenabstand || aftery>=seitenlaenge+seitenabstand) //seitenrand
{
continue;
}
else if(distzumitte > kreisradius)
{
color=0x00FFFFFF;
}
else if(distzumitte > kreisradius-border && makeborder) //border
{
color = Color.argb(A, 0, 0, 0);
}
bmOut.setPixel(afterx, aftery, color);
}
}
return bmOut;
}
This function works fine, but there are some problems occurring that I wasn't able to resolve yet:
The quality of the image is decreased significantly.
The border is not really round, but appears to be flat at the edges of the image (on some devices?!).
I'd appreciate any help regarding these problems. I have to admit that I'm not the best at math, and there is probably a better formula to create the border.
Your source code is hard to read, since it is a mix of German and English in the variable names. Additionally, you don't say which image library you use, so we don't know exactly where the classes Bitmap and Color come from.
Anyway, it is very obvious that you are operating only on a Bitmap. Bitmap means the whole image is stored in RAM pixel by pixel; there is no lossy compression. I don't see anything in your source code that can affect the quality of the image.
It is very likely that the answer is in the code you don't show us. Additionally, what you describe (both of the problems) sounds like very typical low-quality JPEG compression. I am sure that somewhere after you call your function, you convert/save the image to a JPEG. Try saving it at that point as BMP, TIFF or PNG and see whether the error magically disappears. Maybe you can also set the quality level of the JPEG somewhere to avoid this.
To make it easier for others (maybe) also to find a good answer, please allow me to translate your code to English:
public static Bitmap edit_image(Bitmap src,boolean makeborder) {
int width = src.getWidth();
int height = src.getHeight();
int A, R, G, B;
int pixel;
int middlex = width/2;
int middley = height/2;
int sideLength,startx,starty;
if(width>height)
{
sideLength=height;
starty=0;
startx = middlex - (sideLength/2);
}
else
{
sideLength=width;
startx=0;
starty = middley - (sideLength/2);
}
int circleRadius = sideLength/2;
int middleX = startx + circleRadius;
int middleY = starty + circleRadius;
int border=2;
int sideDistance=55;
Bitmap bmOut = Bitmap.createBitmap(sideLength+sideDistance, sideLength+sideDistance, Bitmap.Config.ARGB_8888);
bmOut.setHasAlpha(true);
for(int x = 0; x < width; ++x) {
for(int y = 0; y < height; ++y) {
int distanceToMiddle = (int) (Math.pow(middleX-x,2) + Math.pow(middleY-y,2)); // (Xm-Xp)^2 + (Ym-Yp)^2 = dist^2
distanceToMiddle = (int) Math.sqrt(distanceToMiddle);
pixel = src.getPixel(x, y);
A = Color.alpha(pixel);
R = (int)Color.red(pixel);
G = (int)Color.green(pixel);
B = (int)Color.blue(pixel);
int color = Color.argb(A, R, G, B);
int afterx=x-startx+(sideDistance/2);
int aftery=y-starty+(sideDistance/2);
if(x < startx || y < starty || afterx>=sideLength+sideDistance || aftery>=sideLength+sideDistance) //margin
{
continue;
}
else if(distanceToMiddle > circleRadius)
{
color=0x00FFFFFF;
}
else if(distanceToMiddle > circleRadius-border && makeborder) //border
{
color = Color.argb(A, 0, 0, 0);
}
bmOut.setPixel(afterx, aftery, color);
}
}
return bmOut;
}
I think you need to check PorterDuffXferMode.
You will find some technical information about image compositing modes HERE.
There is a good example of making a bitmap with rounded edges HERE. You just need to tweak the source code a bit and you're ready to go...
Hope it will help.
Regarding the quality, I can't see anything wrong with your method. Running the code with Java Swing, no quality is lost. The only problem is that the image has aliased edges.
The aliasing problem will tend to disappear as the screen resolution increases and is more noticeable at lower resolutions. This might explain why you see it on some devices only. The same problem applies to your border, but in that case it is more noticeable since the color is solid black.
Your algorithm defines a square area of the original image. To find the square, it starts from the image's center and expands to either the width or the height of the image, whichever is smaller. I am referring to this area as the square.
The aliasing is caused by your code that sets the colors (I am using pseudo-code):
if ( outOfSquare() ) {
continue; // case 1: this works but you depend upon the new image' s default pixel value i.e. transparent black
} else if ( insideSquare() && ! insideCircle() ) {
color = 0x00FFFFFF; // case 2: transparent white. <- Redundant
} else if ( insideBorder() ) {
color = Color.argb(A, 0, 0, 0); // case 3: Black color using the transparency of the original image.
} else { // inside the inner circle
// case 4: leave image color
}
Some notes about the code:
Case 1 depends upon the default pixel value of the new image, i.e. transparent black. It works, but it is better to set it explicitly.
Case 2 is redundant. Handle it the same way you handle case 1; we are only interested in what happens inside the circle.
Case 3 (where you draw the border) is not clear about what it expects. Using the alpha of the original image has the potential to mess up your new image if the original alpha happens to vary along the circle's edges. So this is clearly wrong and, depending on the image, can potentially be another cause of your problems.
Case 4 is OK.
Now at your circle's periphery the following color transitions take place:
If border is not used: full transparency -> full image color (case 2 and 4 in the pseudocode)
If border is used: full transparency -> full black -> full image color (cases 2, 3 and 4)
To achieve better quality at the edges you need to introduce some intermediate states that make the transitions smoother (the new, intermediate states are the partial-transparency steps):
Border is not used: full transparency -> partial transparency with image color -> full image color
Border is used: full transparency -> partial transparency of Black color -> full Black color -> partial transparency of Black color + Image color (i.e. blending) -> Full image color
I hope that helps
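One cheap way to get those intermediate states is to derive a fractional alpha from the pixel's distance to the circle edge instead of making a hard inside/outside decision. A sketch (the class and method names and the 1-pixel ramp width are assumptions of mine):

```java
public class EdgeAlpha {
    // Fractional coverage near the circle edge instead of a hard cut.
    // dist is the pixel's distance from the centre, radius the circle
    // radius. Returns 255 well inside, 0 well outside, and a linear
    // ramp across a 1-pixel-wide transition band centred on the edge.
    public static int coverageAlpha(double dist, double radius) {
        double edge = radius - dist;                          // >0 inside, <0 outside
        double a = Math.min(1.0, Math.max(0.0, edge + 0.5));  // 1-px ramp
        return (int) Math.round(a * 255);
    }
}
```

Multiplying the pixel's (or the border color's) alpha by this coverage value produces the smooth transparency -> black -> image transitions described above.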

How to convert pixels to gray scale?

Ok, I am using Processing, which allows me to access the pixels of any image as an int[]. What I now want to do is convert the image to gray-scale. Each pixel has the structure shown below:
...........PIXEL............
[red | green | blue | alpha]
<-8--><--8---><--8--><--8-->
Now, what transformation do I need to apply to the individual RGB values to make the image gray-scale?
What I mean is, how much do I add / subtract to make the image gray-scale?
Update
I found a few methods here: http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
For each pixel, the values of the red, green and blue channels should be set to their average. Like this:
int red = pixel.R;
int green = pixel.G;
int blue = pixel.B;
pixel.R = pixel.G = pixel.B = (red + green + blue) / 3;
Since in your case the pixel colors seem to be stored in an array rather than in properties, your code could end up looking like:
int red = pixel[0];
int green = pixel[1];
int blue = pixel[2];
pixel[0] = pixel[1] = pixel[2] = (red + green + blue) / 3;
The general idea is that when you have a gray scale image, each pixel's color measures only the intensity of light at that point - and the way we perceive that is the average of the intensity for each color channel.
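For packed ARGB ints like the ones in Processing's pixels[] array (rather than per-channel properties), the averaging above would look like this sketch (the class and helper names are mine; the original alpha is preserved):

```java
public class Gray {
    // Unpack the channels of a packed ARGB int, average them,
    // and repack, keeping the original alpha.
    public static int toGray(int argb) {
        int a = (argb >> 24) & 0xFF;
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        int avg = (r + g + b) / 3;
        return (a << 24) | (avg << 16) | (avg << 8) | avg;
    }
}
```

Applying this to every element of pixels[] (followed by updatePixels()) gives the averaged gray-scale result.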
The following code loads an image and cycles through its pixels, changing the saturation to zero while keeping the same hue and brightness values.
PImage img;

void setup() {
    colorMode(HSB, 100);
    img = loadImage("img.png");
    size(img.width, img.height);
    color sat = color(0, 0, 0);
    img.loadPixels();
    for (int i = 0; i < width * height; i++) {
        img.pixels[i] = color(hue(img.pixels[i]), sat, brightness(img.pixels[i]));
    }
    img.updatePixels();
    image(img, 0, 0);
}

More Efficient RGB to ARGB Conversion

I have this working code, which reads a 700x700 RGB24 TIF file and places it into display memory. The line which assigns the pixelARGB value appears to be extremely inefficient; this code takes 3-4 seconds to redraw the screen. Is there a way I can avoid the shifting and ORing and just place the byte values into the correct position within the 32-bit word?
In other languages I have done this with "overlaid variables" or "variant records" or such. I cannot find this in Java. Thank you.
for (y = 0; y < 700; y++) { // for each line
    i = 0;
    for (x = 0; x < 700; x++) { // for each dot
        red = lineBuf[i++] & 0xFF;
        green = lineBuf[i++] & 0xFF;
        blue = lineBuf[i++] & 0xFF;
        pixelARGB = 0xFF000000 | (red << 16) | (green << 8) | blue;
        this_g.setPixel(x + BORDER, y + BORDER, pixelARGB);
    }
    size = is.read(lineBuf, 0, 2100);
}
There is at least one way to convert your TIFF image data buffer into a Bitmap more efficiently, and there is an optimization that can possibly be made.
1. Use an int[] array instead of pixel copies:
You still have to calculate each pixel individually, but set them in an int[] array.
It is the setPixel() function that is taking all your time.
Example:
final int w = 700;
final int h = 700;
final int n = w * h;
final int[] buf = new int[n];
for (int y = 0; y < h; y++) {
    final int yw = y * w;
    for (int x = 0; x < w; x++) {
        int i = yw + x;
        // Calculate 'pixelARGB' here.
        buf[i] = pixelARGB;
    }
}
Bitmap result = Bitmap.createBitmap(buf, w, h, Bitmap.Config.ARGB_8888);
2. Resize within your Loop:
This is not very likely to apply, but if the destination ImageView for the resulting image is known to be smaller than the source image (700x700 in your question), then you can resize within your for loop for an extremely high performance increase.
What you have to do is loop through your destination image pixels, calculate the pixel x, y values you need from your source image, calculate the pixelARGB value for only those pixels, populate a smaller int[] array, and finally generate a smaller Bitmap. Much. Faster.
You can even enhance the resize quality with a homebrew cubic interpolation of the four nearest source pixels for each destination pixel, but I think you will find this unnecessary for display purposes.
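As a hedged sketch of the resize-within-the-loop idea (the class and method names are mine, and nearest-neighbour sampling stands in for the cubic interpolation mentioned above): walk the destination pixels and sample only the source pixels you actually need, so only dstW*dstH conversions are ever performed.

```java
public class ResizeLoop {
    // Nearest-neighbour resize over packed-pixel int[] buffers: for each
    // destination pixel, compute the nearest source pixel and copy it.
    // Replacing the single sample with a weighted blend of the four
    // neighbours would give the cubic/bilinear quality improvement.
    public static int[] resizeNearest(int[] src, int srcW, int srcH,
                                      int dstW, int dstH) {
        int[] dst = new int[dstW * dstH];
        for (int y = 0; y < dstH; y++) {
            int sy = y * srcH / dstH;          // nearest source row
            for (int x = 0; x < dstW; x++) {
                int sx = x * srcW / dstW;      // nearest source column
                dst[y * dstW + x] = src[sy * srcW + sx];
            }
        }
        return dst;
    }
}
```

In the TIFF case, the pixelARGB calculation would move inside this loop so it runs once per destination pixel rather than once per source pixel.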
