I'm working on image manipulation and have a problem I haven't been able to solve: I need an algorithm to detect inverted colors.
As in the example photo below, I need to find and fix the inverted colors:
Currently I'm trying to find a solution using Java and C#.
The closest result so far comes from this method: I invert the image and compare the two images pixel by pixel, with about 70% success.
public static Color getTrueColor(this Color t, Color m)
{
    int[] a = { t.R, t.G, t.B };
    int[] b = { m.R, m.G, m.B };
    int x = (int)a.Average();
    int y = (int)b.Average();
    return x < y ? m : t;
}
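For the Java side mentioned above, a rough equivalent of the same compare-by-average idea might look like this (a sketch using java.awt.Color; the helper name just mirrors the C# method and is not from any library):

```java
import java.awt.Color;

public class TrueColor {
    // Given a pixel and its inverted counterpart, keep the one whose
    // channel average is smaller (the darker one), mirroring the C# method.
    static Color getTrueColor(Color t, Color m) {
        int avgT = (t.getRed() + t.getGreen() + t.getBlue()) / 3;
        int avgM = (m.getRed() + m.getGreen() + m.getBlue()) / 3;
        return avgT < avgM ? m : t;
    }
}
```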
Thanks in advance for any help and suggestions.
I have a Collection of Integers holding Processing colors (they come from images rescaled to 1x1 to get each image's "average" color).
I have this method that should retrieve the nearest color in the collection:
public static int getNearestColor(Collection<Integer> colors, int color) {
return colors.stream()
.min(Comparator.comparingInt(i -> Math.abs(i - color)))
.orElseThrow(() -> new NoSuchElementException("No value present"));
}
But when I run this, it returns a color that is far from the input, even though the collection contains colors that are much nearer. This is the problem I don't understand.
The RGB color channels of a color() are encoded in an int. You can extract the red, green and blue components of a color with red(), green() and blue().
Treat the color channels as a 3-dimensional vector (PVector) and compute the Euclidean distance of two color vectors with dist(). The color with the shortest "distance" is the "nearest" color:
In the following function the arguments c1 and c2 are colors of type int:
float ColorDistance(int c1, int c2) {
return PVector.dist(
new PVector(red(c1), green(c1), blue(c1)),
new PVector(red(c2), green(c2), blue(c2)));
}
Find the "nearest" color, in a collection of colors, by finding the minimum floating point "distance" (ColorDistance(i, color)).
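Outside of a Processing sketch, the same idea can be written in plain Java by extracting the channels with bit shifts; a self-contained sketch (assuming colors are packed ARGB ints, as Processing uses):

```java
import java.util.Collection;
import java.util.List;
import java.util.NoSuchElementException;

public class NearestColor {
    // Extract the 8-bit channels from a packed ARGB int.
    static int red(int c)   { return (c >> 16) & 0xFF; }
    static int green(int c) { return (c >> 8) & 0xFF; }
    static int blue(int c)  { return c & 0xFF; }

    // Euclidean distance between two colors in RGB space.
    static double colorDistance(int c1, int c2) {
        int dr = red(c1) - red(c2);
        int dg = green(c1) - green(c2);
        int db = blue(c1) - blue(c2);
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }

    // Nearest color by channel-wise distance, not by raw int difference.
    public static int getNearestColor(Collection<Integer> colors, int color) {
        return colors.stream()
                .min((a, b) -> Double.compare(colorDistance(a, color),
                                              colorDistance(b, color)))
                .orElseThrow(() -> new NoSuchElementException("No value present"));
    }
}
```

Comparing raw int values (as in the question) effectively compares mostly the alpha and red bytes, which is why the results look arbitrary; comparing per-channel distance fixes that.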
Arrays in Java don't have a stream method; perhaps you meant Arrays.stream(colors). An IntStream has no way to compare on anything other than natural order. You could map each value to its difference (abs(i - color)) first, but then you've thrown away the information you were looking for (the original color), so that won't work either. Let's box it instead. This results in the following code, which is exactly like your code, except this compiles and runs. I've also added a test case to make it a self-contained example:
int[] colors = {1,4,5,9,12};
int target = 6;
int a = Arrays.stream(colors).boxed()
.min(Comparator.comparingInt(i -> Math.abs(i - target)))
.orElseThrow(() -> new NoSuchElementException());
System.out.println(a);
and, whoa nelly, '5' falls out, which is exactly what you want.
In other words, the intent of your code is fine. If it's not giving the right answer, your inputs are not what you think they are, or something else is wrong that can't be gleaned from your paste.
May I suggest that, if it is at all possible, you put the question in a simple, self-contained form (as this question clearly could be; see the code snippet in this answer)? Often you'll answer your own question that way :)
public static int getNearestColor(int[] colors, int color) {
    int minDiff = IntStream.of(colors)
            .map(val -> Math.abs(val - color))
            .min()
            .getAsInt();
    OptionalInt num = IntStream.of(colors)
            .filter(val -> val == color + minDiff)
            .findFirst();
    return num.isPresent() ? color + minDiff : color - minDiff;
}
I want to look at a certain region of an image to see whether the selected pixels have changed in color. How would I go about doing this? (I'm trying to check for movement.)
I was thinking I could do something like this:
public int[] rectanglePixels(BufferedImage img, Rectangle range) {
    // assumes an int-backed image, e.g. TYPE_INT_RGB
    int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
    int[] boxColors = new int[range.width * range.height];
    int i = 0;
    for (int y = range.y; y < range.y + range.height; y++) {
        for (int x = range.x; x < range.x + range.width; x++) {
            boxColors[i++] = pixels[y * img.getWidth() + x];
        }
    }
    return boxColors;
}
Maybe use that to extract the colors from the region? I'm not sure if I'm doing that right, but after that, should I re-run this method, compare the two arrays, and declare that the image has changed if the number of differing pixels reaches some threshold?
One approach to detecting movement is to analyze the pixel color variation of the entire image, or of a subimage, at distinct times (n, n-1, n-2, ...). This assumes a fixed camera. You might use two thresholds:
The threshold of color channel variation beyond which two pixels are considered distinct.
The threshold of distinct pixels between two images beyond which you consider there is movement. In other words: two images of the same scene at times n and n-1 have just 10 distinct pixels. Is that real movement or just noise?
Below is an example showing how to count the distinct pixels between two images, given a color channel threshold.
int distinctPixels = 0;
int redA, greenA, blueA, redB, greenB, blueB;
for (int y = 0; y < imageA.getHeight(); y++) {
    for (int x = 0; x < imageA.getWidth(); x++) {
        redA = imageA.getIntComponent0(x, y);
        greenA = imageA.getIntComponent1(x, y);
        blueA = imageA.getIntComponent2(x, y);
        redB = imageB.getIntComponent0(x, y);
        greenB = imageB.getIntComponent1(x, y);
        blueB = imageB.getIntComponent2(x, y);
        if (Math.abs(redA - redB) > colorThreshold ||
            Math.abs(greenA - greenB) > colorThreshold ||
            Math.abs(blueA - blueB) > colorThreshold) {
            distinctPixels++;
        }
    }
}
However, there are Marvin plug-ins to do so. Check this source code example. It detects and display regions containing "movements", as shown in the image below.
There are more sophisticated approaches that determine/subtract background for this purpose or deal with camera movements. I guess you should start from the simplest scenario and then go to more complex ones.
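The loop above uses Marvin's accessors; a rough plain-Java equivalent using BufferedImage.getRGB might look like this (a sketch, assuming two images of the same size):

```java
import java.awt.image.BufferedImage;

public class MotionDiff {
    // Count pixels whose R, G or B channel differs by more than colorThreshold
    // between two same-sized images.
    static int countDistinctPixels(BufferedImage a, BufferedImage b, int colorThreshold) {
        int distinct = 0;
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                int p = a.getRGB(x, y), q = b.getRGB(x, y);
                int dr = Math.abs(((p >> 16) & 0xFF) - ((q >> 16) & 0xFF));
                int dg = Math.abs(((p >> 8) & 0xFF) - ((q >> 8) & 0xFF));
                int db = Math.abs((p & 0xFF) - (q & 0xFF));
                if (dr > colorThreshold || dg > colorThreshold || db > colorThreshold) {
                    distinct++;
                }
            }
        }
        return distinct;
    }
}
```

The second threshold (how many distinct pixels count as movement) is then just a comparison on the returned count.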
You should use BufferedImage.getRGB(startX, startY, w, h, rgbArray, offset, scansize) unless you really want to play around with the loops and extra arrays.
Comparing the two values against a threshold would serve as a good indicator. Perhaps you could calculate the average of each array to determine its overall color and compare the two? If you do not want a threshold value, just compare the arrays for exact equality with Arrays.equals().
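For reference, the bulk getRGB call suggested above might be used like this (a sketch; the Rectangle plays the role of the question's Range parameter):

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class RegionPixels {
    // Pull all pixels of a rectangular region in one call, row-major,
    // instead of indexing into the raster by hand.
    static int[] rectanglePixels(BufferedImage img, Rectangle range) {
        return img.getRGB(range.x, range.y, range.width, range.height,
                          null, 0, range.width);
    }
}
```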
For college, we have been given an assignment where, given an image, we have to identify the "figures", their color, and the number of "pixel groups" inside them. Let me explain:
The image above has one figure (in the image there can be multiple figures, but let us forget about that for now).
The background color of the canvas is the pixel at 0,0 (in this case, yellow)
The border color of the figure is black (it can be any color other than the canvas' background color).
The figure's background color is white (it can also be the same as the canvas' background color).
A figure can only have one background color.
There are two pixel groups in the figure. One is a pool of blue pixels, and the other is a pool of red with some green inside. As you can see, the color of a pixel group's pixels doesn't matter (it just has to differ from the figure's background color). What matters is that the pixels are in contact (even diagonally). So despite having two different colors, that group is still considered a single group.
As you can see, the border can be as irregular as you wish. It only has one color, however.
It is known that a pixel group will not touch the border.
I was told that a pixel group's colors can be any except the figure's background color. I assume that then it can be the same as the figure's border color (black).
We have been given a class capable of taking images and converting them to a matrix (each element being an integer representing the color of the pixel).
And that's it. I'm doing it with Java.
WHAT HAVE I DONE SO FAR
Iterate through each pixel in the matrix
If I find a pixel that is different from the background color, I will assume it belongs to the border of the figure. I will call this pixel initialPixel from now on.
Note that the initialPixel in the image I provided is that black pixel in the top-left corner of the figure. I made a sharp cut there purposefully to illustrate it.
My mission now is to find the background color of the figure (in this case white).
But I'm having a great deal of trouble finding that background color (white). This is the closest approach I came up with, which worked for some cases, but not with this image:
Since I know the color of the border, I could find the first different color to the south of the initialPixel. That sounded like a good idea, and it did work sometimes, but it fails with the image provided: it returns yellow in this case, since initialPixel is quite far away from the figure's contents.
Assuming I did find the figure's background color (white), my next task would be to realize that there exist two pixel groups within the figure. This one seems easier:
Since I now know the figure's background color (white), I can try iterating through each pixel within the figure and, if I find one that does not belong to the border and is not part of the figure's background, I can already tell there is one pixel group. I can begin a recursive function to find all pixels related to such group and "flag" them so that in the future iterations I can completely ignore such pixels.
WHAT I NEED
Yes, my problem is about how to find the figure's background color (keep in mind it can be the same as the whole image's background color - for now it is yellow, but it can be white as well) based on what I described before.
I don't need any code - I'm just having trouble thinking a proper algorithm for such. The fact that the border can have such weird irregular lines is killing me.
Or even better: have I been doing it wrong all along? Maybe I shouldn't have focused so much on that initialPixel at all. Maybe a different kind of initial method would have worked? Are there any documents/examples about topics like this? I realize there is a lot of research on "computer vision" and such, but I can't find much about this particular problem.
SOME CODE
My function to retrieve a vector with all the figures:
*Note: Figure is just a class that contains some values like the background color and the number of elements.
public Figure[] getFiguresFromImage(Image image) {
Figure[] tempFigures = new Figure[100];
int numberOfFigures = 0;
matrixOfImage = image.getMatrix();
int imageBackgroundColor = matrixOfImage[0][0];
int pixel = 0;
for (int y = 0; y < matrixOfImage.length; ++y) {
for (int x = 0; x < matrixOfImage[0].length; ++x) {
pixel = matrixOfImage[y][x];
if (!exploredPixels[y][x]) {
// This pixel has not been evaluated yet
if (pixel != imageBackgroundColor ) {
// This pixel is different than the background color
// Since it is a new pixel, I assume it is the initial pixel of a new figure
// Get the figure based on the initial pixel found
tempFigures[numberOfFigures] = retrieveFigure(y,x);
++numberOfFigures;
}
}
}
}
// ** Do some work here after getting my figures **
return null;
}
Then, clearly, the function retrieveFigure(y,x) is what I am being unable to do.
Notes:
For learning purposes, I should not be using any external libraries.
A good way to solve this problem is to treat the image as a graph, where there is one node ('component' in this answer) for each color filled area.
Here is one way to implement this approach:
1. Mark all pixels as unvisited.
2. For each pixel, if the pixel is unvisited, perform the flood fill algorithm on it. During the flood fill, mark each connected pixel as visited. You should now have a list of solid color areas in your image (the 'components'), so you just have to figure out how they are connected to each other:
3. Find the background component: the component that contains the 0,0 pixel.
4. Find the component that has pixels adjacent to the background component: this is your figure border.
5. Find the components with pixels adjacent to the newly found figure border component. There will be two such components; pick the one that isn't the background (i.e. the one that doesn't contain the 0,0 pixel). This is your figure background.
6. To find the pixel groups, simply count the number of components with pixels adjacent to the figure background component (ignoring, of course, the figure border component).
Advantages of this approach:
runs in O(# pixels) time.
easy to understand and implement.
doesn't assume the background color and figure background color are different.
To make sure you understand how iterating through the components and their neighbors might work, here's an example pseudocode implementation for step 6:
List<Component> allComponents;  // built in step 2
Component background;           // found in step 3 (the component containing the 0,0 pixel)
Component figureBorder;         // found in step 4
Component figureBackground;     // found in step 5
for each Component c in allComponents:
    if c == background or c == figureBorder or c == figureBackground:
        continue;
    for each Pixel pixel in c.pixelList:
        for each Pixel neighbor in pixel.neighbors:
            if neighbor.getComponent() == figureBackground:
                c.isPixelGroup = true;
int numPixelGroups = 0;
for each Component c in allComponents:
    if (c.isPixelGroup)
        numPixelGroups++;
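A minimal Java flood fill for the component-building step might look like this (a sketch: iterative rather than recursive, to avoid stack overflows on large areas, and using the color matrix described in the question):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class FloodFill {
    // Collects every pixel 4-connected to (startY, startX) that has the same
    // color, marking each one in 'visited'. Returns the component's pixels
    // as {y, x} pairs.
    static List<int[]> component(int[][] matrix, boolean[][] visited,
                                 int startY, int startX) {
        List<int[]> pixels = new ArrayList<>();
        int color = matrix[startY][startX];
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{startY, startX});
        visited[startY][startX] = true;
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            pixels.add(p);
            for (int[] d : dirs) {
                int ny = p[0] + d[0], nx = p[1] + d[1];
                if (ny >= 0 && ny < matrix.length && nx >= 0 && nx < matrix[0].length
                        && !visited[ny][nx] && matrix[ny][nx] == color) {
                    visited[ny][nx] = true;
                    stack.push(new int[]{ny, nx});
                }
            }
        }
        return pixels;
    }
}
```

Calling this for every unvisited pixel yields the full component list; note that pixel groups are defined with diagonal contact, so for that final counting step you would extend dirs to all 8 neighbors.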
Try this code:
import java.util.Scanner;
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.ImageIO;
class Analyzer{
private int pixdata[][];
private int rgbdata[][];
private BufferedImage image;
int background_color;
int border_color;
int imagebg_color;
private void populateRGB(){
rgbdata = new int[image.getWidth()][image.getHeight()];
for(int i = 0; i < image.getWidth(); i++){
for(int j = 0; j < image.getHeight(); j++){
rgbdata[i][j] = image.getRGB(i, j);
}
}
int howmanydone = 0;
int prevcolor,newcolor;
prevcolor = rgbdata[0][0];
/*
for(int i = 0; i < image.getWidth(); i++){
for(int j = 0; j < image.getHeight(); j++){
System.out.print(rgbdata[i][j]);
}
System.out.println("");
}*/
for(int i = 0; i < image.getWidth(); i++){
for(int j = 0; j < image.getHeight(); j++){
newcolor = rgbdata[i][j];
if((howmanydone == 0) && (newcolor != prevcolor)){
background_color = prevcolor;
border_color = newcolor;
prevcolor = newcolor;
howmanydone = 1;
}
if((newcolor != prevcolor) && (howmanydone == 1)){
imagebg_color = newcolor;
}
}
}
}
public Analyzer(){ background_color = 0; border_color = 0; imagebg_color = 0;}
public int background(){ return background_color; }
public int border() { return border_color;}
public int imagebg() {return imagebg_color;}
public int analyze(String filename,String what) throws IOException{
image = ImageIO.read(new File(filename));
pixdata = new int[image.getHeight()][image.getWidth()];
populateRGB();
if(what.equals("background"))return background();
if(what.equals("border"))return border();
if(what.equals("image-background"))return imagebg();
else return 0;
}
}
public class ImageAnalyze{
public static void main(String[] args){
Analyzer an = new Analyzer();
String imageName;
Scanner scan = new Scanner(System.in);
System.out.print("Enter image name:");
imageName = scan.nextLine();
try{
int a = an.analyze(imageName,"border");//"border","image-background","background" will get you different colors
System.out.printf("Color bg: %x",a);
}catch(Exception e){
System.out.println(e.getMessage());
}
}
}
The color returned is in ARGB format; you will need to extract R, G and B from it.
There is a bug in this code; I'm working on an implementation using a finite state machine. In the first state you're inside the image, so 0,0 is the background color. When the color first changes, the new color is the border color. The third state is when you're inside the image and inside the border and the color changes again: that is the figure's background color.
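A sketch of the three-state scan described above (a hypothetical helper, not the fixed Analyzer itself; it also skips background-colored pixels while looking for the figure background, which is one way to address the bug mentioned):

```java
public class StateScan {
    // Three-state scan over the pixel matrix, as described:
    // state 0: inside the canvas background; state 1: border color seen;
    // state 2: figure background color seen. Returns {background, border, figureBg}.
    static int[] scanColors(int[][] rgb) {
        int state = 0;
        int background = rgb[0][0], border = 0, figureBg = 0;
        for (int[] row : rgb) {
            for (int c : row) {
                if (state == 0 && c != background) {
                    border = c;
                    state = 1;
                } else if (state == 1 && c != border && c != background) {
                    figureBg = c;
                    state = 2;
                }
            }
        }
        return new int[]{background, border, figureBg};
    }
}
```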
I'm making an avatar generator where the avatar components are from PNG files with transparency. The files are things like body_1.png or legs_5.png. The transparency is around the parts but not within them and the images are all grayscale. The parts are layering fine and I can get a grayscale avatar.
I would like to be able to colorize these parts dynamically, but so far I'm not having good luck. I've tried converting the pixel data from RGB to HSL and using the original pixel's L value, while supplying the new color's H value, but the results are not great.
I've looked at Colorize grayscale image but I can't seem to make what he's saying work in Java. I end up with an image that has fairly bright, neon colors everywhere.
What I would like is for transparency to remain, while colorizing the grayscale part. The black outlines should still be black and the white highlight areas should still be white (I think).
Does anyone have a good way to do this?
EDIT:
Here's an image I might be trying to color:
Again, I want to maintain the brightness levels of the grayscale image (so the outlines stay dark, the gradients are visible, and white patches are white).
I've been able to get a LookupOp working somewhat based on Colorizing images in Java but the colors always look drab and dark.
Here's an example of my output:
The color that was being used is this one (note the brightness difference): http://www.color-hex.com/color/b124e7
This is my lookupOp
protected LookupOp createColorizeOp(short R1, short G1, short B1) {
short[] alpha = new short[256];
short[] red = new short[256];
short[] green = new short[256];
short[] blue = new short[256];
//int Y = 0.3*R + 0.59*G + 0.11*B
for (short i = 0; i < 30; i++) {
alpha[i] = i;
red[i] = i;
green[i] = i;
blue[i] = i;
}
for (short i = 30; i < 256; i++) {
alpha[i] = i;
red[i] = (short)Math.round((R1 + i*.3)/2);
green[i] = (short)Math.round((G1 + i*.59)/2);
blue[i] = (short)Math.round((B1 + i*.11)/2);
}
short[][] data = new short[][] {
red, green, blue, alpha
};
LookupTable lookupTable = new ShortLookupTable(0, data);
return new LookupOp(lookupTable, null);
}
EDIT 2: I changed my LookupOp to use the following and got much nicer looking colors:
red[i] = (short)((R1)*(float)i/255.0);
green[i] = (short)((G1)*(float)i/255.0);
blue[i] = (short)((B1)*(float)i/255.0);
It seems what will work for you is something like this:
for each pixel
if pixel is white, black or transparent then leave it alone
else
apply desired H and S and make grayscale value the L
convert new HSL back to RGB
Edit: after seeing your images I have a couple of comments:
It seems you want to give darker tones special treatment, since you are not colorizing anything below 30. Following the same logic, shouldn't you also exempt the higher values from colorizing? That will prevent the whites and near-whites from getting tinted with color.
You should not be setting the alpha values along with RGB. The alpha value from the original image should always be preserved. Your lookup table algorithm should only affect RGB.
While you say that you tried HSL, that is not in the code that you posted. You should do your colorizing in HSL, then convert the resulting colors to RGB for your lookup table as that will preserve the original brightness of the grayscale. Your lookup table creation could be something like this:
short H = ??; // your favorite hue
short S = ??; // your favorite saturation
for (short i = 0; i < 256; i++) {
if (i < 30 || i > 226) {
red[i] = green[i] = blue[i] = i; // don't do alpha here
}
else {
HSL_to_RGB(H, S, i, red[i], green[i], blue[i])
}
}
Note: you have to provide the HSL to RGB conversion function. See my answer on Colorize grayscale image for links to source code.
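For reference, a standard HSL-to-RGB conversion might be sketched like this (H in degrees 0-360, S and L in 0-1, returning a packed 0xRRGGBB int; this is one common formulation, not necessarily the exact code from the linked answer):

```java
public class HslToRgb {
    // Convert H (0-360), S (0-1), L (0-1) to a packed 0xRRGGBB int.
    static int hslToRgb(float h, float s, float l) {
        float c = (1 - Math.abs(2 * l - 1)) * s;          // chroma
        float x = c * (1 - Math.abs((h / 60f) % 2 - 1));  // secondary component
        float m = l - c / 2;                              // lightness offset
        float r, g, b;
        if (h < 60)       { r = c; g = x; b = 0; }
        else if (h < 120) { r = x; g = c; b = 0; }
        else if (h < 180) { r = 0; g = c; b = x; }
        else if (h < 240) { r = 0; g = x; b = c; }
        else if (h < 300) { r = x; g = 0; b = c; }
        else              { r = c; g = 0; b = x; }
        int ri = Math.round((r + m) * 255);
        int gi = Math.round((g + m) * 255);
        int bi = Math.round((b + m) * 255);
        return (ri << 16) | (gi << 8) | bi;
    }
}
```

With this, the lookup table loop would pass the grayscale value i (scaled to 0-1) as L and keep H and S fixed.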
I'm trying to compare a given pixel to Color.BLACK, but the problem is that it yields false for all images. (I made a black image and it also returned false!)
public int isItBlackOrWhite(int x, int y)
{
    int c = bimg.getPixel(x, y);
    if (c == Color.BLACK) {
        System.out.println("Helooo");
        return 0;
    } else {
        return 1;
    }
}
I also tried comparing it with Color.WHITE, but the application quit and force-closed!
public int isItBlackOrWhite(int x, int y)
{
    int c = bimg.getPixel(x, y);
    if (c == Color.WHITE) {
        System.out.println("Helooo");
        return 0;
    } else {
        return 1;
    }
}
NOTE: bimg is a Bitmap image taken from the camera.
First, use LogCat for printing comments and variables (Eclipse -> Window -> Show View -> Android -> LogCat).
Then you should see the error in the Log.
That will help us to locate the error.
I don't know the specifics of the pixel formats you're using, but you're trying to compare a 32-bit integer representation of a color (most probably in ARGB format) with an object of type Color. You need to first get the ARGB representation of the Color object (probably by calling Color.getRGB() ) before comparing it to the result of getPixel().
As for the "black" image: you need another test image. Taking a picture with the camera will never get you a truly black photo. So finding truly black pixels will be hard too.
(Just add a debug statement to print the value of c to validate that. For Color.BLACK you should get -16777216, or hexadecimal 0xff000000.)
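A quick sanity check of that constant:

```java
public class BlackValue {
    public static void main(String[] args) {
        // 0xff000000 reinterpreted as a signed 32-bit int
        System.out.println(0xff000000); // prints -16777216
    }
}
```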
Color.BLACK is a Color, not an int. You need to convert them to the same type before comparing.
Ugly solution, but the other answers don't give an actual solution to your problem. Try this:
Color black = new Color(0, 0, 0);
Color white = new Color(255, 255, 255);
if (yourPixel.equals(black)) { /* operate */ }
You can also create a class of color constants and use it accordingly; I think you can find one on the internet, or you can implement such a class yourself from the RGB values of the colors.
The problem might be your usage of ==, which doesn't quite have the meaning you intend. In Java it checks that two references point to the same object, and the color you get from your picture will never be the same object as Color.BLACK.
What you want is to check that the colors' values are the same: the red, green, blue, and alpha channels. This is value-based equality, which is usually implemented as the .equals() method of an object.
Try this one:
c.equals(Color.BLACK)
instead of
c == Color.BLACK