get percentage of image blur - java

Hi, so I have to make a script (the programming language doesn't matter, but I'll use Java here as an example) that compares two black-and-white images and tells which one is blurred the most.
So I have to make a function like this:
int getImageBlurPercentage(BufferedImage img)
{
    ArrayList<Integer> colorList = new ArrayList<Integer>();

    // Part 1: fill colorList with color values (0 = black, 255 = white)
    for (int y = 0; y < img.getHeight(); y++)       // go through the Y axis
        for (int x = 0; x < img.getWidth(); x++)    // go through the X axis
            colorList.add(img.getRGB(x, y) & 0xFF); // add each pixel's value, i.e. 0 to 255

    // Part 2: process (this is the part where I need help!)
    int lastColor = 0;
    for (int color : colorList)
    {
        // Something has to be done here
        // to compare pixel by pixel
        // and get a noise result or difference result
        // and convert it to a percentage (0% - 100%)
        // This is where I need your help!
    }
    return 0; // placeholder until Part 2 is worked out
}
So this is where I need your help, guys; I don't really know how to handle this.
I think this needs some math formulas, which I'm bad at.
I would appreciate it if someone could help or give a hint that leads me down the right path. Thank you.

When you blur an image (let's say you use Gaussian blur), you are actually doing some "averaging" of the pixels of the image, which means you make your edges "smoother".
So to check whether one image has "smoother" edges than the other, you can look at the gradients of the image, like Jan Dvorak suggested, but don't forget to normalize by the number of pixels in the image (otherwise larger images will get larger results).
If you want to compare two entirely different images, the test will be much more complex, because different scenes naturally have different smoothness.
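To make that concrete, here is a minimal sketch of such a normalized gradient score (the method name is mine, and it assumes the images are already grayscale, so any channel of getRGB works as the gray value):
import java.awt.image.BufferedImage;

static double sharpness(BufferedImage img) {
    // mean absolute difference between neighbouring pixels, normalized by the
    // number of comparisons; sharper image -> larger score, blurrier -> smaller
    long sum = 0, count = 0;
    for (int y = 0; y < img.getHeight(); y++)
        for (int x = 0; x < img.getWidth(); x++) {
            int c = img.getRGB(x, y) & 0xFF; // grayscale: the blue channel is the gray value
            if (x > 0) { sum += Math.abs(c - (img.getRGB(x - 1, y) & 0xFF)); count++; }
            if (y > 0) { sum += Math.abs(c - (img.getRGB(x, y - 1) & 0xFF)); count++; }
        }
    return (double) sum / count;
}
Comparing sharpness(imageA) with sharpness(imageB) then tells you which is blurrier; a 0-100% figure only really makes sense relative to the sharper image, e.g. 100 * (1 - min/max).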

Related

How can I best implement fog of war with Java?

Hello I am an inexperienced programmer and this is my first question on Stack Overflow!
I am attempting to implement 'fog of war' in my Java game. This means most of my map begins off black, and then as one of my characters moves around, parts of the map will be revealed. I have searched around, including here, found a few suggestions and tried tweaking them myself. Each of my approaches works; however, I run into significant runtime issues with each. For comparison, before any of my fog of war attempts I was getting 250-300 FPS.
Here is my basic approach:
Render my background and all objects on my JPanel
Create a black BufferedImage (fogofwarBI)
Work out which areas of my map need to be visible
Set the relevant pixels on my fogofwarBI to be fully transparent
Render my fogofwarBI, thus covering parts of the screen with black and in transparent sections allowing the background and objects to be seen.
For initialising the buffered image I have done the following in my FogOfWar() class:
private BufferedImage blackBI = loader.loadImage("/map_black_2160x1620.png");
private BufferedImage fogofwarBI = new BufferedImage(blackBI.getWidth(), blackBI.getHeight(), BufferedImage.TYPE_INT_ARGB);
public FogOfWar() {
    fogofwarBI.getGraphics().drawImage(blackBI, 0, 0, null);
}
In each of my attempts I start the character in the middle of 'visible' terrain, i.e. in a section of my map which has no fog (where my fogofwarBI will have fully transparent pixels).
Attempt 1: setRGB
First I find the 'new' coordinates in my character's field of vision if it has moved, i.e. not every pixel within the character's range of sight, but just the pixels at the edge of his range of vision in the direction he is moving. This is done with a for loop, and it will go through up to 400 or so pixels.
I feed each of these x and y coordinates into my FogOfWar class.
I check whether these x,y coordinates are already visible (in which case I don't do anything with them, to save time). I do this check by maintaining a Set of Lists, where each List contains two elements: an x and a y value, and the Set is a unique set of those coordinate Lists. The Set begins empty, and I add x,y coordinates to it to represent transparent pixels. I use the Set to keep the collection unique and because I understand its contains method is a fast way of doing this check, and I store the coordinates in a List to avoid mixing up x and y.
If a given x,y position on my fogofwarBI is not currently visible, I set its RGB to transparent using .setRGB and add it to my transparentPoints Set so that coordinate will not be edited again in future.
Set<List<Integer>> transparentPoints = new HashSet<List<Integer>>();

public void editFog(int x, int y) {
    if (transparentPoints.contains(Arrays.asList(x, y)) == false) {
        fogofwarBI.setRGB(x, y, 0); // 0 is transparent in ARGB
        transparentPoints.add(Arrays.asList(x, y));
    }
}
I then render it using
public void render(Graphics g, Camera camera) {
    g.drawImage(fogofwarBI, 0, 0, Game.v_WIDTH, Game.v_HEIGHT,
            camera.getX() - Game.v_WIDTH/2, camera.getY() - Game.v_HEIGHT/2,
            camera.getX() + Game.v_WIDTH/2, camera.getY() + Game.v_HEIGHT/2, null);
}
Where I am basically applying the correct part of my fogofwarBI to my JPanel (800*600) based on where my game camera is.
Results:
Works correctly.
FPS of 20-30 when moving through fog, otherwise normal (250-300).
This method is slow due to the .setRGB function being run up to 400 times each time my game 'ticks'.
Attempt 2: Raster
In this attempt I create a raster of my fogofwarBI to play with the pixels directly in an array format.
private BufferedImage blackBI = loader.loadImage("/map_black_2160x1620.png");
private BufferedImage fogofwarBI = new BufferedImage(blackBI.getWidth(), blackBI.getHeight(), BufferedImage.TYPE_INT_ARGB);
WritableRaster raster = fogofwarBI.getRaster();
DataBufferInt dataBuffer = (DataBufferInt) raster.getDataBuffer();
int[] pixels = dataBuffer.getData();

public FogOfWar() {
    fogofwarBI.getGraphics().drawImage(blackBI, 0, 0, null);
}
My editFog method then looks like this:
public void editFog(int x, int y) {
    if (transparentPoints.contains(Arrays.asList(x, y)) == false) {
        pixels[x + y * Game.m_WIDTH] = 0; // 0 is transparent in ARGB
        transparentPoints.add(Arrays.asList(x, y));
    }
}
My understanding is that the raster is in (constant?) communication with the pixels array, and so I render the BI in the same way as in attempt 1.
Results:
Works correctly.
A constant FPS of around 15.
I believe it is constantly this slow (regardless of whether my character is moving through fog or not) because whilst manipulating the pixels array is quick, the raster is constantly working.
Attempt 3: Smaller Raster
This is a variation on attempt 2.
I read somewhere that constantly resizing a BufferedImage using the 10-argument version of .drawImage is slow. I also thought that having a raster for a 2160*1620 BufferedImage might be slow.
Therefore I tried having my 'fog layer' only equal to the size of my view (800*600), and updating every pixel using a for loop, based on my camera position and on whether the current pixel should be black or visible according to my standard transparentPoints Set.
So now my editFog Class just updates the Set of invisible pixels and my render class looks like this:
public void render(Graphics g, Camera camera) {
    int xOffset = camera.getX() - Game.v_WIDTH/2;
    int yOffset = camera.getY() - Game.v_HEIGHT/2;
    for (int i = 0; i < Game.v_WIDTH; i++) {
        for (int j = 0; j < Game.v_HEIGHT; j++) {
            if (transparentPoints.contains(Arrays.asList(i + xOffset, j + yOffset))) {
                pixels[i + j * Game.v_WIDTH] = 0;
            } else {
                pixels[i + j * Game.v_WIDTH] = myBlackARGB;
            }
        }
    }
    g.drawImage(fogofwarBI, 0, 0, null);
}
So I am no longer resizing my fogofwarBI on the fly, but I am updating every single pixel every time.
Result:
Works correctly.
FPS: Constantly 1 FPS - worst result yet!
I guess that any savings from not resizing my fogofwarBI and having it smaller are massively outweighed by updating 800*600 pixels in the raster rather than around 400.
I have run out of ideas and none of my internet searching is getting me any further in trying to do this in a better way. I think there must be a way to do fog of war effectively, but perhaps I am not yet familiar enough with Java or the available tools.
Any pointers as to whether my current attempts could be improved, or whether I should be trying something else altogether, would be very much appreciated.
Thanks!
This is a good question. I am not familiar with AWT/Swing-style rendering, so I can only try to explain a possible solution to the problem.
From a performance standpoint I think it is a better choice to chunk/raster the FOW into bigger sections of the map rather than using a pixel-based system. That will reduce the number of checks per tick, and updating it will also take fewer resources, as only a small portion of the window/map needs to update. The larger the grid, the fewer the checks, but there is a visual penalty the bigger you go.
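As a rough sketch of that idea (the cell size, map height constant and method names here are my own, not from the question):
static final int CELL = 32; // larger cells = fewer checks per tick, but blockier fog
boolean[][] revealed = new boolean[Game.m_WIDTH / CELL][Game.m_HEIGHT / CELL];

public void reveal(int px, int py) { // call with a pixel position the player can see
    revealed[px / CELL][py / CELL] = true;
}

public void renderFog(Graphics g, int camX, int camY) {
    g.setColor(Color.BLACK);
    for (int cx = 0; cx < revealed.length; cx++)
        for (int cy = 0; cy < revealed[cx].length; cy++)
            if (!revealed[cx][cy]) // draw fog only over hidden cells
                g.fillRect(cx * CELL - camX, cy * CELL - camY, CELL, CELL);
}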
Leaving it like that would make the FOW look blocky/pixelated, but it's not something you can't fix.
For the direct surroundings of the player, you can add a circle texture with the player at its center. You can then use blending (I believe the term in AWT/Swing is compositing) to 'override' the alpha where the circle overlaps the FOW texture. This way the pixel-based updating is done by the rendering API, which usually uses hardware-accelerated methods for these things. (For custom pixel-based rendering, something like shaders is often used, if supported by the rendering API.)
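For illustration, a minimal Java2D sketch of that compositing trick, punching a soft-edged vision circle into the existing fogofwarBI (playerX, playerY and visionRadius are hypothetical names):
Graphics2D g2 = fogofwarBI.createGraphics();
g2.setComposite(AlphaComposite.DstOut); // destination alpha is erased where the source is opaque
RadialGradientPaint light = new RadialGradientPaint(
        new Point(playerX, playerY), visionRadius,
        new float[]{0f, 1f},
        new Color[]{new Color(0, 0, 0, 255), new Color(0, 0, 0, 0)}); // opaque centre, clear rim
g2.setPaint(light);
g2.fillOval(playerX - visionRadius, playerY - visionRadius, 2 * visionRadius, 2 * visionRadius);
g2.dispose();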
This is enough if you only need temporary vision in the FOW (if you don't need to 'remember' the map); you wouldn't even need a texture grid for the FOW then. But I suspect you do want to 'remember' the map, so in that case:
The blocky/pixelated look can be fixed like they do with grid-based terrain: basically, add small additional textures/shapes based on the surroundings to make things look nice. The link below provides good examples and a detailed explanation of how to do the 'terrain transitions', as they are called.
https://www.gamedev.net/articles/programming/general-and-gameplay-programming/tilemap-based-game-techniques-handling-terrai-r934/
I hope this gives a better result. If you cannot get a better result, I would advise switching over to something like OpenGL for the render engine, as it is meant for games, while the AWT/Swing API is primarily used for UI/application rendering.

Java Mandelbrot visualization questions on zooming and coloring

I am trying to program a visualisation of the Mandelbrot set in Java, and there are a couple of things that I am struggling with. I realize that questions around this topic have been asked a lot and there is a lot of documentation online, but a lot of it seems very complicated and I am relatively new to programming.
The first issue
The first issue I have is to do with zooming in on the fractal. My goal is to make an "infinite" zoom on the fractal (of course not truly infinite, just as far as a regular computer allows regarding calculation time and precision). The approach I am currently going for is the following, on a timer:
Draw the set using some number of iterations on the range (-2, 2) on the real axis and (-2, 2) on the imaginary axis.
Change those ranges to zoom in.
Redraw that section of the set with the number of iterations.
It's the second step that I struggle with. This is my current code:
for (int Py = beginY; Py < endY; Py++) {
    for (int Px = beginX; Px < endX; Px++) {
        double x0 = map(Px, 0, width, -2, 2);  // x maps against width (was swapped
        double y0 = map(Py, 0, height, -2, 2); // with height; hidden by the square image)
Px and Py are the coordinates of the pixels in the image. The image is 1000x1000. The map function takes a number, in this case Px or Py, in the range (0, 1000) and divides it evenly over the range (-2, 2), so it returns the corresponding value in that range.
I think that in order to zoom in, I'll have to change the -2 and 2 values somehow in the timer, but whatever I try doesn't seem to work. The zoom always ends up slowing down after a while, or it ends up zooming in on a part that is inside the set rather than near its border. I tried multiplying those bounds by some scale factor every timer tick, but that doesn't really produce the result I was looking for.
Now I have two questions about this issue.
Is this the right approach to visualizing the set and zooming in (draw, change range, redraw)?
If it is, how do I zoom in properly on an area that is interesting and that will keep zooming in properly even after running for a minute?
The second issue
Of course when visualizing something, you need to get some actual visual thing. In this case I want to color the set in a way similar to what you see here: (https://upload.wikimedia.org/wikipedia/commons/f/fc/Mandel_zoom_08_satellite_antenna.jpg).
My guess is that you have to use the number of iterations a pixel went through before breaking out of the loop to give it some color value. However, I only really know how to do this with a black-and-white color scheme. I tried making a color array that holds as many different gray colors as the maximum number of iterations, starting from black and ending in white. Here is my code:
Color[] colors = new Color[maxIterations + 2];
for (int i = 0; i < colors.length; i++) {
    colors[i] = new Color((int) map(i, 0, maxIterations + 2, 0, 255),
                          (int) map(i, 0, maxIterations + 2, 0, 255),
                          (int) map(i, 0, maxIterations + 2, 0, 255));
}
I then just use the iteration count as an index into the array and assign that color to the pixel. I have two questions about this:
Will this also work as we zoom into the fractal in the previously described manner?
How can I add my own color scheme in this, like in the picture? I've read some things about "linear interpolation" but I don't really understand what it is and in what way it can help me.
It sounds like you've made a good start.
Re the first issue: I believe there are ways to automatically choose an "interesting" portion of the set to zoom in on, but I don't know what they are. And I'm quite sure it involves more than just applying some linear function to your current bounding rectangle, which is what it sounds like you're doing.
So you could try to find out what these methods are (might get mathematically complicated), but if you're new to programming, you'll probably find it easier to let the user choose where to zoom. This is also more fun in the beginning, since you can run your program repeatedly and explore a new part of the set each time.
A simple way to do this is to let the user draw a rectangle over the image, and use your map function to convert the pixel coordinates of the drawn rectangle to the new real and imaginary coordinates of your zoom area.
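A rough sketch of that conversion, reusing your map function and assuming the current view bounds live in fields like reMin/reMax/imMin/imMax (the names are mine):
void zoomTo(Rectangle sel) { // sel = the rectangle the user dragged, in pixel coordinates
    double newReMin = map(sel.x, 0, width, reMin, reMax);
    double newReMax = map(sel.x + sel.width, 0, width, reMin, reMax);
    double newImMin = map(sel.y, 0, height, imMin, imMax);
    double newImMax = map(sel.y + sel.height, 0, height, imMin, imMax);
    reMin = newReMin; reMax = newReMax;
    imMin = newImMin; imMax = newImMax;
    // then redraw; constrain the dragged rectangle to a square to keep the aspect ratio
}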
You could also combine both approaches: once you've found somewhere you find interesting by manually selecting the zoom area, you can set this as your "final destination", and have the code gradually and smoothly zoom into it, to create a nice movie.
It will always get gradually slower though, as you start using ever more precise coordinates, until you reach the limits of precision with double and it becomes a pixellated mess. From there, if you want to zoom further, you'll have to look into arbitrary-precision arithmetic with BigDecimal - and it will continue to get slower and slower.
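The gradual zoom itself can then be a simple interpolation of the bounds toward the chosen point each timer tick. Scaling the bounds directly, as you tried, zooms toward the origin instead, which is why it drifts off the interesting area. A sketch with the same hypothetical fields:
double zoomFactor = 0.95; // < 1 zooms in; closer to 1 = smoother, slower zoom

void zoomTick() { // pull all four bounds toward the fixed target point
    reMin = targetRe + (reMin - targetRe) * zoomFactor;
    reMax = targetRe + (reMax - targetRe) * zoomFactor;
    imMin = targetIm + (imMin - targetIm) * zoomFactor;
    imMax = targetIm + (imMax - targetIm) * zoomFactor;
    // then redraw, mapping pixels to (reMin..reMax, imMin..imMax) instead of (-2..2)
}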
Re the second issue: starting off by calculating a value of numIterations / maxIterations (i.e. between 0 and 1) for each pixel is the right idea (I think this is basically what you're doing).
From there, there are all sorts of ways to convert this value to a colour, it's time to get creative!
A simple one is to have an array of a few very different colours. E.g. if you had white (0.0), red (0.25), green (0.5), blue (0.75), black (1.0), then if your calculated number was exactly one of the ones listed, you'd use the corresponding colour. If it's somewhere between, you blend the colours, e.g. for 0.3 you'd take:
((0.5-0.3)*red + (0.3-0.25)*green) / (0.5 - 0.25)
= 0.8*red + 0.2*green
Taking a weighted average of two colours is something I'll leave as an exercise ;)
(hint: take separate averages of the r, g, and b values. Playing with the alpha values could maybe also work).
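In case you get stuck, a minimal sketch of that exercise, averaging the channels separately (w is the weight of the first colour, between 0 and 1):
static Color blend(Color a, Color b, double w) {
    return new Color(
            (int) (w * a.getRed()   + (1 - w) * b.getRed()),
            (int) (w * a.getGreen() + (1 - w) * b.getGreen()),
            (int) (w * a.getBlue()  + (1 - w) * b.getBlue()));
}
// e.g. blend(Color.RED, Color.GREEN, 0.8) gives the 0.8*red + 0.2*green mix from above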
Another one, if you want to get more mathsy, is to take an equation for a spiral and use it to calculate a point on a plane in HSB colour space (you can keep the brightness at some fixed value, say 1). In fact, any curve in 2D or 3D which you know how to write as an equation of one real variable can be used this way to give you smoothly changing colours, if you interpret the coordinates as points in some colour space.
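The simplest such curve is a straight line through hue space; a sketch (the hue offset and multiplier are arbitrary knobs to play with):
static Color smoothColor(double t) { // t = numIterations / maxIterations, in 0..1
    if (t >= 1.0) return Color.BLACK; // pixels that never escaped are inside the set
    return Color.getHSBColor((float) (0.66 + 3 * t), 0.8f, 1.0f); // hue wraps past 1.0
}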
Hope that's enough to keep you going! Let me know if it's not clear.

find archery target in image of different perspectives

I'm trying to find a way to identify an archery target and all of its rings in a photo which might be taken from different perspectives:
My goal is to identify the target and later on also where the arrows hit the target to automatically count their score. Presumptions are as follows:
The camera's position is not fixed and might change
The archery target might also move or rotate slightly
The target might be of a different size and have a different number of circles
There might be many holes (sometimes big scratches) in the target
I have already tried OpenCV to find contours, but even with preprocessing (grayscale -> blur (-> threshold) -> edge detection) I still find a few hundred contours which are all distracted by the arrows or other obstacles (holes) on the target, so it is impossible to find a nice circular line. Using Hough to find circles doesn't work either, as it gives me weird results: Hough will only find perfect circles, not ellipses.
With preprocessing the image this is my best result so far:
I was thinking about ellipse and circle fitting, but as I don't know the radius, position and pose of the target, this might be a very CPU-consuming task. Another thought was about recognition from a template, but the position and rotation of the target change often.
Now I have the idea to follow every line on the image to check whether it is a curve, and then guess which curves belong together to form a circle/ellipse (an ellipse because of the perspective). The problem is that the lines might be intersected by arrows or holes within a short distance, so the line would be too short to check whether it is a curve. With the smaller circles on the target, the chance is high that they aren't recognised at all. Also, as you can see, circles 8, 7 and 6 have no clear line on the left side.
I think it is not necessary to do perspective correction to achieve this task as long as I can clearly identify all the rings of the target.
I googled for a long time and found some theses, which are all not exactly focused on this specific task and are also too mathematical for me to understand.
Is it by any chance possible to achieve this task? Could you share with me an idea how to solve this problem? Anything is very appreciated.
I'm doing this in Java, but the programming language is secondary. Please let me know if you need more details.
For starters, see
Detecting circles and shots from paper target.
If you are using a standardized target as in the image (btw, I use these same ones for my bow :) ) then do not cut off the color. You can select the regions of blue, red and yellow pixels to ease up the detection. See:
footprint fitting
From that you need to fit the circles. But as you've got perspective, the objects are not circles, nor even axis-aligned ellipses. You've got 2 options:
Perspective correction
Use the right bottom table rectangle area as a marker (or the whole target). It is a rectangle with a known aspect ratio, so measure it in the image and construct a transformation that will change the image so it becomes a rectangle again. There are tons of material about this (3D scene reconstruction), so google/read/implement. The basics are based just on de-skew + scaling.
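Since OpenCV was already in play, a rough sketch of that de-skew with its Java bindings (the corner coordinates are placeholders; in practice they come from detecting the marker rectangle, and photo is the loaded input Mat):
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

MatOfPoint2f srcCorners = new MatOfPoint2f( // the rectangle's corners as found in the photo
        new Point(105, 210), new Point(880, 190),
        new Point(905, 700), new Point(90, 730));
MatOfPoint2f dstCorners = new MatOfPoint2f( // where they belong for the known aspect ratio
        new Point(0, 0), new Point(800, 0),
        new Point(800, 600), new Point(0, 600));
Mat h = Imgproc.getPerspectiveTransform(srcCorners, dstCorners);
Mat corrected = new Mat();
Imgproc.warpPerspective(photo, corrected, h, new Size(800, 600));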
Approximate circles by ellipses (not axis aligned!)
So fit ellipses to the found edges instead of circles. This will not be as precise, but still close enough. See:
ellipse fitting
[Edit1] Sorry, I did not have time/mood for this for a while.
As you were unable to adapt my approach yourself, here it is:
remove noise
You need to recolor your image to remove noise and ease up the rest... I convert it to HSV and detect your 4 colors (circles + paper) by simple thresholding, and recolor the image to those 4 colors (circles, paper, background) back in RGB space.
fill the gaps
In some temp image I fill the gaps in the circles created by arrows and stuff. It is simple: just scan pixels from opposite sides of the image (in each line/row) and stop on hitting the selected circle color (you need to go from the outer circles to the inner ones so as not to overwrite the previous ones...). Now just fill the space between these two points with the selected circle color. (I start with paper, then blue, then red, and yellow last):
now you can use the linked approach
So find the avg point of each color; that is the approximate circle center. Then do a histogram of the radiuses and choose the biggest one. From here, just cast lines out of the circle and find where the circle really stops, compute the ellipse semi-axes from that, and also update the center (this handles the perspective distortion). To visually check, I render a cross and circle for each circle into the image from #1:
As you can see, it is pretty close. If you need an even better match, then cast more lines (not just the 90-degree H and V lines) to obtain more points, and compute the ellipse algebraically or fit it by approximation (second link).
C++ code (for explanations look into first link):
picture pic0, pic1, pic2;
// pic0 - source
// pic1 - output
// pic2 - temp
DWORD c0;
int x, y, i, j, n, m, r, *hist;
int x0, y0, rx, ry;     // ellipse
const int colors[4] =   // color sequence from center
{
    0x00FFFF00, // RGB yellow
    0x00FF0000, // RGB red
    0x000080FF, // RGB blue
    0x00FFFFFF, // RGB white
};

// init output as source image and resize temp to same size
pic1 = pic0;
pic2 = pic0; pic2.clear(0);

// recolor image (in HSV space -> RGB) to avoid noise and select target pixels
pic1.rgb2hsv();
for (y = 0; y < pic1.ys; y++)
    for (x = 0; x < pic1.xs; x++)
    {
        color c;
        int h, s, v;
        c = pic1.p[y][x];
        h = c.db[picture::_h];
        s = c.db[picture::_s];
        v = c.db[picture::_v];
        if (v > 100) // bright enough pixels?
        {
            i = 25; // threshold
            if      (abs(h- 40)+abs(s-225)<i) c.dd=colors[0]; // RGB yellow
            else if (abs(h-250)+abs(s-165)<i) c.dd=colors[1]; // RGB red
            else if (abs(h-145)+abs(s-215)<i) c.dd=colors[2]; // RGB blue
            else if (abs(h-145)+abs(s- 10)<i) c.dd=colors[3]; // RGB white
            else c.dd=0x00000000; // RGB black means unselected pixels
        } else c.dd=0x00000000;   // RGB black
        pic1.p[y][x] = c;
    }
pic1.save("out0.png");

// fit ellipses:
pic1.bmp->Canvas->Pen->Width = 3;
pic1.bmp->Canvas->Pen->Color = 0x0000FF00;
pic1.bmp->Canvas->Brush->Style = bsClear;
m = (pic1.xs + pic1.ys) * 2;
hist = new int[m]; if (hist == NULL) return;
for (j = 3; j >= 0; j--)
{
    // select color per pass
    c0 = colors[j];
    // fill the gaps with H,V lines into temp pic2
    for (y = 0; y < pic1.ys; y++)
    {
        for (x = 0;         (x < pic1.xs) && (pic1.p[y][x].dd != c0); x++); x0 = x;
        for (x = pic1.xs-1; (x > x0)      && (pic1.p[y][x].dd != c0); x--);
        for (; x0 < x; x0++) pic2.p[y][x0].dd = c0;
    }
    for (x = 0; x < pic1.xs; x++)
    {
        for (y = 0;         (y < pic1.ys) && (pic1.p[y][x].dd != c0); y++); y0 = y;
        for (y = pic1.ys-1; (y > y0)      && (pic1.p[y][x].dd != c0); y--);
        for (; y0 < y; y0++) pic2.p[y0][x].dd = c0;
    }
    if (j == 3) continue; // do not continue for border
    // avg point (possible center)
    x0 = 0; y0 = 0; n = 0;
    for (y = 0; y < pic2.ys; y++)
        for (x = 0; x < pic2.xs; x++)
            if (pic2.p[y][x].dd == c0)
            { x0 += x; y0 += y; n++; }
    if (!n) continue; // no points found
    x0 /= n; y0 /= n; // center
    // histogram of radius
    for (i = 0; i < m; i++) hist[i] = 0;
    n = 0;
    for (y = 0; y < pic2.ys; y++)
        for (x = 0; x < pic2.xs; x++)
            if (pic2.p[y][x].dd == c0)
            {
                r = sqrt(((x-x0)*(x-x0)) + ((y-y0)*(y-y0))); n++;
                hist[r]++;
            }
    // select the most frequent radius (the biggest bin)
    for (r = 0, i = 0; i < m; i++)
        if (hist[r] < hist[i])
            r = i;
    // cast lines from possible center to find edges (and recompute rx,ry)
    for (x = x0-r, y = y0; (x >= 0)      && (pic2.p[y][x].dd == c0); x--); rx = x; // scan left
    for (x = x0+r, y = y0; (x < pic2.xs) && (pic2.p[y][x].dd == c0); x++);         // scan right
    x0 = (rx + x) >> 1; rx = (x - rx) >> 1;
    for (x = x0, y = y0-r; (y >= 0)      && (pic2.p[y][x].dd == c0); y--); ry = y; // scan up
    for (x = x0, y = y0+r; (y < pic2.ys) && (pic2.p[y][x].dd == c0); y++);         // scan down
    y0 = (ry + y) >> 1; ry = (y - ry) >> 1;
    i = 10;
    pic1.bmp->Canvas->MoveTo(x0-i, y0);
    pic1.bmp->Canvas->LineTo(x0+i, y0);
    pic1.bmp->Canvas->MoveTo(x0, y0-i);
    pic1.bmp->Canvas->LineTo(x0, y0+i);
    //rx = r; ry = r;
    pic1.bmp->Canvas->Ellipse(x0-rx, y0-ry, x0+rx, y0+ry);
}
pic2.save("out1.png");
pic1.save("out2.png");
pic1.bmp->Canvas->Pen->Width = 1;
pic1.bmp->Canvas->Brush->Style = bsSolid;
delete[] hist;

Most used tones in a picture by checking the HSV color

I hope you can help me.
I am taking a picture on Android, generating a thumbnail and, using this thumbnail, analyzing the pixels to get the most used tones.
I thought that using RGB would make it hard to group them, so I turned every pixel color into an HSV color. As you can see, I use a Float3, so hsv.x is the same as hsv.h, hsv.y is hsv.s, and hsv.z equals hsv.v.
scores is an int[] array, and I store there the number of pixels of each tone. This algorithm works pretty well for finding blue tones (aquamarine, light blue and dark blue), but it struggles to recognize monochromes (it moves them to the yellow score) and warm colors (it "confuses" orange with yellow). Also, as a last question, I don't know how to recognize brown (which is the last "big color" I think I missed). Here comes the algorithm:
for (Float3 hsv : hsvs) {
    if (hsv.y < 0.15) {
        if (hsv.z < 0.2)
            scores[BLACK]++;
        else if (hsv.z < 0.6)
            scores[GREY]++;
        else
            scores[WHITE]++;
    }
    else if (hsv.x < 15f || hsv.x > 345f)
        scores[RED]++;
    else if (hsv.x < 40)
        scores[ORANGE]++;
    else if (hsv.x < 70)
        scores[YELLOW]++;
    else if (hsv.x < 120)
        scores[GREEN]++;
    else if (hsv.x < 160)
        scores[AQUAMARINE]++;
    else if (hsv.x < 200)
        scores[LIGHT_BLUE]++;
    else if (hsv.x < 240)
        scores[DARK_BLUE]++;
    else if (hsv.x < 300)
        scores[PURPLE]++;
    else
        scores[PINK]++;
}
Do you think I can improve the algorithm by changing some numbers, or should I start with a different approach?
Edit:
Let me put you in context. My app takes a picture, grabs a square thumbnail (for practical reasons) and analyzes the resulting picture. Because it is a thumb, it loses some detail and dimensions (say I create 200x200px thumbs), so in my particular case there will always be 40,000 pixels. What I do with this image is check the most used tones.
By most used, I mean tones that appear in more than 3000 pixels (about 7.5% of the image). I do not rank them; I just check some checkboxes named after an arbitrary set of colors (which will probably be the ones shown to you, plus brown). Then the user checks or unchecks the colors he considers appropriate. In fact, as Geobits pointed out, this is a human problem, and the most used colors are not necessarily the most relevant colors. That is why this tool doesn't need to be perfect, just to avoid selecting colors that don't fit the picture at all.
If I were doing it, instead of making each pixel fall into one of several buckets, I'd see how far each hue is from each target color. For instance, you could take the difference from each "target hue" and sum up all the differences. Whichever has the lowest total after looping through all the pixels should be the "most used". Of course it's not perfect, but naming colors is not a trivial task for a computer.
For example, to get the total for "green" (hue 120 in my arbitrary world):
float runningTotalForGreen = 0;
for (Float3 hsv : hsvs)
{
    float diff = Math.abs(hsv.x - 120) % 360; // distance from green's hue
    if (diff > 180) diff = 360 - diff;        // take the short way around the wheel
    runningTotalForGreen += diff;             // lower total = closer to green
}
You'll probably want to store your target colors and totals in arrays for easier looping, but that's the general idea.
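Spelled out with arrays, and with the wrap-around handled the same way (the target hues are rough guesses you'd tune):
float[] targetHues = { 0, 30, 60, 120, 150, 190, 240, 280, 330 }; // red, orange, yellow, ...
float[] totals = new float[targetHues.length];
for (Float3 hsv : hsvs) {
    for (int i = 0; i < targetHues.length; i++) {
        float d = Math.abs(hsv.x - targetHues[i]) % 360;
        totals[i] += (d > 180) ? 360 - d : d; // shortest distance around the wheel
    }
}
// the smallest entry in totals marks the target hue the image leans toward overall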
Edit:
The reason I think this will work better than "buckets" is this: consider a picture where about 35% of the picture is red. The other 65% is somewhere on the border of light/dark blue. So say 34% falls into the light blue bucket and 31% falls into dark blue. Your method says it's red, since 35% is greater than both. Using the hue difference will most likely return one of the blues.
Of course, most methods will have some image that it fails for. The key is finding the one that errors least. That depends a lot on what type of image it is. I do agree you'll need special handling for certain colors(black/white/brown/etc).
The main issue here is that it's a human problem. For my example image above, some people would say light blue. Some would say dark blue. Some might even say red, depending on how vivid/contrasting it is, especially if the blue was the background with red in the fore. Saying "this picture is x color" isn't ever consistent unless it's basically monochrome.
I can't say whether this is a good approach or not, but check out this page on Color Names. When you click on the colors in the wheel it will give you the hue range it uses to define each color name.

Add a certain amount to the 'red' value of RGB to create a sunset effect, help please. (java)

OK, so I want a program that goes through a picture line by line and adds a certain amount to the red value (RGB) to create a sunset effect. The only problem is that, when you get the different values for red, green, and blue, I cannot just add, let's say, 50 to the red value to get a sunset effect. The code below is only the part that is responsible for looping through the lines and changing the pixel values.
for (int y = 0; y < sunsetPic.getHeight(); y++)
{
    for (int x = 0; x < sunsetPic.getWidth(); x++)
    {
        targetPixel = sunsetPic.getPixel(x, y);
        pixelColor = targetPixel.getColor();
        redValue = pixelColor.getRed();
        greenValue = pixelColor.getGreen();
        blueValue = pixelColor.getBlue();
        // fails on bright pixels: redValue + 50 can exceed the valid maximum of 255
        pixelColor = new Color(redValue + 50, greenValue, blueValue);
        targetPixel.setColor(pixelColor);
    }
}
As you can see, I cannot just add 50 to redValue to create a sunset effect. Can someone please help me find a way to get my sunset effect?
To achieve a sunset effect, you need to do a bit more than add a bit of red. Odds are good you'll need to remove a bit of green and blue too. Those removals will likely be proportional, leaving a percentage of the original color present. The most flexible technique for this is to use one or more color matrices. That way you can independently adjust each output color based on a linear combination of the input colors. Generally you include the alpha channel as well, which means that most color matrices are 4x5, with the fifth element being a constant added or subtracted regardless of input.
A code example is here, and depending on your need for fidelity, you can tweak the transformation matrix as many times as you like until you get the visual effect you are seeking.
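If you'd rather hand-roll it than use a library class, here is a sketch of applying one 4x5 row per output channel (the matrix values are illustrative guesses, not a calibrated sunset):
// each row produces one output channel: {rIn, gIn, bIn, aIn, constant}
static final double[][] SUNSET = {
        { 1.0, 0.1,  0.0, 0.0, 40 }, // red out: keep red, borrow a little green, add a constant
        { 0.0, 0.85, 0.0, 0.0, 0 },  // green out: slightly reduced
        { 0.0, 0.0,  0.7, 0.0, 0 },  // blue out: reduced the most
        { 0.0, 0.0,  0.0, 1.0, 0 },  // alpha out: unchanged
};

static int clamp(double v) { return (int) Math.max(0, Math.min(255, v)); }

static Color apply(double[][] m, Color c) {
    double[] in = { c.getRed(), c.getGreen(), c.getBlue(), c.getAlpha() };
    double[] out = new double[4];
    for (int row = 0; row < 4; row++) {
        out[row] = m[row][4]; // start from the constant term
        for (int col = 0; col < 4; col++)
            out[row] += m[row][col] * in[col];
    }
    return new Color(clamp(out[0]), clamp(out[1]), clamp(out[2]), clamp(out[3]));
}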
If you can, load the image into a paint program like Gimp or Photoshop and use that to edit and preview color changes. Once you get the look you want, use the percentages of RGB you arrived at, and those will be your runtime changes.
I'd suggest using a multiplier instead of an addition, and I'd suggest not boosting your red above 1.0 but following Edwin Buck's idea of multiplying down green and blue.
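Inside your existing loop that suggestion would look roughly like this (the 0.85/0.6 factors are guesses to tune by eye):
// keep red as-is instead of pushing it past 255; pull the cool channels down
pixelColor = new Color(redValue,
        (int) (greenValue * 0.85), // a little less green
        (int) (blueValue * 0.6));  // noticeably less blue for a warm cast
targetPixel.setColor(pixelColor);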
