Finding the average color of an Image - java

I am using a BufferedImage to hold a 10 by 10 sample of an image. With this Image I would like to find an approximate average color (as a Color object) that represents this image. Currently I have two ideas on how to implement this feature:
Scale the image down to a 1 by 1 instance and take the color of that single pixel as the average.
Use two nested for loops: the inner loop averages each row pixel by pixel, and the outer loop combines the row averages into an overall average.
I really like the idea of the first solution, however I am not sure how accurate it would be. The second solution would be as accurate as they come, but it seems incredibly tedious. I also believe that looking up each pixel's color (e.g. via getRGB) is processor intensive on a large scale such as this (I am performing this averaging roughly 640 to 1920 times per second), please correct me if I am wrong. Since this method will be very CPU intensive, I would like to use a fairly efficient algorithm.

It depends what you mean by average. If you have half the pixels red and half the pixels blue, would the average be purple? In that case I think you can try adding all the values up and dividing it by how many pixels you have.
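A minimal sketch of that summing approach, assuming the sample is a standard BufferedImage and ignoring the alpha channel:

    import java.awt.Color;
    import java.awt.image.BufferedImage;

    // Average the red, green and blue channels over every pixel.
    static Color averageColor(BufferedImage img) {
        long sumR = 0, sumG = 0, sumB = 0;
        int w = img.getWidth(), h = img.getHeight();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);   // packed as 0xAARRGGBB
                sumR += (rgb >> 16) & 0xFF;
                sumG += (rgb >> 8) & 0xFF;
                sumB += rgb & 0xFF;
            }
        }
        int n = w * h;
        return new Color((int) (sumR / n), (int) (sumG / n), (int) (sumB / n));
    }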
However, I suspect that rather than the average, you want the dominant colour?
In that case one alternative could be to discretise the colours into 'buckets' (say at intervals of 100, or even coarser; in the extreme case just 3 buckets, one for red, one for green and one for blue), and create a histogram (a simple array of counts). You would then take the bucket with the highest count.
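A rough sketch of that bucket idea, using 4 levels per channel (64 buckets) as an arbitrary choice:

    import java.awt.Color;
    import java.awt.image.BufferedImage;

    // Quantize each channel into 4 levels (64 buckets in total) and
    // return the center color of the most frequent bucket.
    static Color dominantColor(BufferedImage img) {
        int[] counts = new int[64];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = ((rgb >> 16) & 0xFF) >> 6;   // each channel -> 0..3
                int g = ((rgb >> 8) & 0xFF) >> 6;
                int b = (rgb & 0xFF) >> 6;
                counts[(r << 4) | (g << 2) | b]++;
            }
        }
        int best = 0;
        for (int i = 1; i < 64; i++) {
            if (counts[i] > counts[best]) best = i;
        }
        // Map the winning bucket back to a representative (center) color.
        return new Color(((best >> 4) & 3) * 64 + 32,
                         ((best >> 2) & 3) * 64 + 32,
                         (best & 3) * 64 + 32);
    }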
Be careful with idea 1. Remember that scaling often takes place by sampling. Since you have a very small image, you have already lost a lot of information. Scaling down further will probably just sample a few pixels and not really average all of them. Better check what algorithm your scaling process is using.
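For what it's worth, the JDK does ship a scaler that genuinely averages; a sketch of idea 1 that forces it (SCALE_AREA_AVERAGING averages all source pixels rather than sampling a few):

    import java.awt.Color;
    import java.awt.Image;
    import java.awt.image.BufferedImage;

    // Idea 1 done explicitly: SCALE_AREA_AVERAGING really averages every
    // source pixel, unlike the default sampling-based scaling.
    static Color averageByScaling(BufferedImage img) {
        Image scaled = img.getScaledInstance(1, 1, Image.SCALE_AREA_AVERAGING);
        BufferedImage one = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        one.getGraphics().drawImage(scaled, 0, 0, null);
        return new Color(one.getRGB(0, 0));
    }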

Related

Mixing "Color"-values in java for image scaling with different portions

I have an image in the form of a Color[][] array. It's fairly big, so I want to scale it down into another, smaller Color[][] array.
The problem is that I need to find the average color value of up to nine (probably not more) of the original image's pixels, weighted by how much space each one takes up in a given pixel of the downscaled version of the image.
One of my (probably overcomplicated) ideas is to convert each color to its RGB values and average those, though I suspect the exact portions and colors might not combine that way.
It would be nice to avoid a huge amount of code; a simple solution would be ideal, or at least a hint that there isn't a short solution, so that I can find a workaround.
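For what it's worth, averaging the RGB channels with area weights is a common approach; a minimal sketch, assuming each weight is the fraction of the destination pixel's area the source pixel covers (so the weights sum to 1):

    import java.awt.Color;

    // Weighted average of up to nine source pixels; weights are assumed
    // to be area fractions summing to 1.
    static Color weightedAverage(Color[] pixels, double[] weights) {
        double r = 0, g = 0, b = 0;
        for (int i = 0; i < pixels.length; i++) {
            r += pixels[i].getRed() * weights[i];
            g += pixels[i].getGreen() * weights[i];
            b += pixels[i].getBlue() * weights[i];
        }
        return new Color((int) Math.min(255, Math.round(r)),
                         (int) Math.min(255, Math.round(g)),
                         (int) Math.min(255, Math.round(b)));
    }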

Compare image to a reference image in Java

I need to write tests (in Java) for an image capturing application. To be more precise:
Some image is captured from a scanner.
The application returns a JPEG of this image.
The test shall compare the scanned image with a reference image. This reference image has been captured by the identical application and has been verified visually that it contains the same content.
My first idea was comparing the images pixel by pixel, but as the application applies JPEG compression the result would never be "100 % identical", quite apart from the fact that the scanner captures the image with slight differences each time.
The goal is not to compare two "random images" for similarity like "Do both images show a car?" but rather "How similar is the captured car to the reference image of this car?".
What other possibilities do you see?
Let me say first that image processing is not my best field, nor am I an expert in any way. However, here is my suggestion: take the absolute difference of each pair of corresponding pixels, in row-major order. Then average those differences and see whether the average is within a certain threshold: if so, the images are similar in nature. Another way would be to go through the images pixel by pixel once more, but instead count the number of pixels that differ by at least x, where x is a number chosen to represent how close the match needs to be. Then see whether the differing pixels make up 5% of all pixels, 10%, and so on. The more differing pixels, the more different the images.
I hope this helps you, and best of luck. P.S. To judge how far apart two pixels are, it works like finding the distance between two points, i.e. sqrt[(r2-r1)^2 + (g2-g1)^2 + (b2-b1)^2]. P.P.S. All of this assumes, of course, that you listened to Hovercraft Full of Eels, and have somehow made sure that the images match up in size and position.
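A minimal sketch of the first variant (mean per-pixel color distance), assuming the images already have identical dimensions:

    import java.awt.image.BufferedImage;

    // Mean per-pixel Euclidean distance in RGB space; assumes both images
    // have the same dimensions. Small values mean "similar".
    static double meanPixelDistance(BufferedImage a, BufferedImage b) {
        double total = 0;
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                int p = a.getRGB(x, y), q = b.getRGB(x, y);
                int dr = ((p >> 16) & 0xFF) - ((q >> 16) & 0xFF);
                int dg = ((p >> 8) & 0xFF) - ((q >> 8) & 0xFF);
                int db = (p & 0xFF) - (q & 0xFF);
                total += Math.sqrt(dr * dr + dg * dg + db * db);
            }
        }
        return total / (a.getWidth() * a.getHeight());
    }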

Need Conceptual Help Rendering a Heat Map

I need to create a heatmap for android google maps. I have geolocation and points that have negative and positive weight attributed to them that I would like to visually represent. Unlike the majority of heatmaps, I want these positive and negative weights to destructively interfere; that is, when two points are close to each other and one is positive and the other is negative, the overlap of them destructively interferes, effectively not rendering areas that cancel out completely.
I plan on using the android google map's TileOverlay/TileProvider class, which has the job of creating/rendering tiles based on a given location and zoom. (I don't have an issue with this part.)
How should I go about rendering these Tiles? I plan on using java's Graphics class but the best that I can think of is going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel. This seems very inefficient, however, and I was looking for suggestions on a better approach.
Edit: I've considered everything from using a non-android Google Map inside of a WebView to using a TileOverlay to using a GroundOverlay. What I am now considering is having a large two-dimensional array of "squares". Each square would have a longitude, a latitude, and total +/- weights. When a new data point is added, instead of rendering it exactly where it is, it will be added to the "square" it falls in; the weight of this data point will be added to the square, and I will then use the GoogleMap Polygon object to render the square on the map. The ratio of + points to - points will determine the color that is rendered, with a ratio close to 1:1 being clear, >1 being blue (cold point), and <1 being red (hot point).
Edit: a.k.a. clustering the data into small regional groups
I suggest trying exactly what you describe: "going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel."
Even if it is slow, it will work. There are not too many tiles on the screen, each tile has a limited number of pixels, and all of this is done on a background thread.
All of this is still followed by translating the Bitmap into a byte[]. The byte[] is a representation of a PNG or JPG file, so it is not a simple pixel mapping from the Bitmap. This last operation takes some time too and may well require more processing power than your whole algorithm.
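A skeletal version of that flow, assuming the standard TileProvider interface from Google Play services; weightAt() and colorFor() are hypothetical helpers standing in for your data-point aggregation:

    import android.graphics.Bitmap;
    import android.graphics.Color;
    import com.google.android.gms.maps.model.Tile;
    import com.google.android.gms.maps.model.TileProvider;
    import java.io.ByteArrayOutputStream;

    public class HeatTileProvider implements TileProvider {
        private static final int SIZE = 256;   // standard tile edge in pixels

        @Override
        public Tile getTile(int x, int y, int zoom) {
            Bitmap bitmap = Bitmap.createBitmap(SIZE, SIZE, Bitmap.Config.ARGB_8888);
            for (int px = 0; px < SIZE; px++) {
                for (int py = 0; py < SIZE; py++) {
                    // weightAt() is a hypothetical helper that sums the signed
                    // weights of the data points near this pixel's lat/lng.
                    float w = weightAt(x, y, zoom, px, py);
                    bitmap.setPixel(px, py, colorFor(w));
                }
            }
            // The expensive final step: encode the Bitmap as a PNG byte[].
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
            return new Tile(SIZE, SIZE, out.toByteArray());
        }

        // Hypothetical color mapping: cancelled-out areas stay transparent.
        private int colorFor(float w) {
            if (Math.abs(w) < 0.01f) return Color.TRANSPARENT;
            return w > 0 ? Color.argb(128, 255, 0, 0)    // hot
                         : Color.argb(128, 0, 0, 255);   // cold
        }

        private float weightAt(int tileX, int tileY, int zoom, int px, int py) {
            return 0; // placeholder: real code would query your data points
        }
    }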
Edit (moved from comment):
What you describe in the edit sounds like a simple clustering on LatLng. I can't say it's a better or worse idea, but it's something worth a try.
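A bare-bones sketch of that square clustering; GRID is a hypothetical resolution constant (real code would size the squares per zoom level):

    // Accumulate signed data points into a fixed lat/lng grid of "squares".
    public class SquareGrid {
        static final int GRID = 100;    // hypothetical resolution
        static final float[][] weights = new float[GRID][GRID];

        static void addPoint(double lat, double lng, float weight) {
            int row = Math.min(GRID - 1, (int) ((lat + 90.0) / 180.0 * GRID));
            int col = Math.min(GRID - 1, (int) ((lng + 180.0) / 360.0 * GRID));
            weights[row][col] += weight;  // + and - points cancel in the sum
        }
    }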

drawCircle vs drawBitmap

I'm planning on implementing a new set of figures in my game: plain circles. The number of drawn sprites (in this case circles) starts at 2-3 and can potentially grow without bound, though the maximum will probably be around 60. In total there will have to be 5 types of circles, each with a different color and probably a different size too. Seeing as I won't implement it until Monday, I thought I'd ask on Stack Overflow first.
Does anybody already know which method is faster?
Bitmaps are almost always faster than any kind of draw. With the right preparation drawing a bitmap is simply dumping memory to the screen. Drawing a circle involves a significant number of calculations, including anti-aliasing. I presented a paper which covered this at JavaOne 2009, but papers that old seem to have been removed from the site.
It does depend on how big your bitmap would need to be, but for sizes under 10 pixels bitmap sprites are much faster than even simple graphic operations like drawing crosses and lines. You also need to make sure that your sprite won't require any kind of transform when it is drawn, and that it is a form compatible with the screen memory.
If every circle is to be a different color or thickness, or worse a different size, then that's another matter. The cost of creating each bitmap would outweigh the savings.
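Since only five fixed types are mentioned, pre-rendering one sprite per type once sidesteps that cost; a minimal sketch, assuming the android.graphics Canvas API that drawCircle/drawBitmap come from:

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Paint;

    // Render each circle type to a bitmap once; per frame, just blit it.
    public class CircleSprites {
        static Bitmap makeCircleSprite(int radius, int color) {
            Bitmap sprite = Bitmap.createBitmap(radius * 2, radius * 2,
                                                Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(sprite);
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setColor(color);
            canvas.drawCircle(radius, radius, radius, paint);
            return sprite;
        }
        // Per frame: canvas.drawBitmap(sprite, x, y, null) - no circle math.
    }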
You should also remember the first rule of optimization: don't do it unless you have to.

FFT Image to Measure Similarities

Ok I'm writing a small Java app that accepts two images as inputs, compares them, then gives a quantitative output as a measure of similarity (eg. 50% similar).
To my understanding FFT is a good way to measure similarity of two images. But I can't for the love of god figure out how to code/implement it.
So far I've implemented another function which basically gives me two histograms (one for each image). All I need now is to write a method that will FFT an image and give me a quantifiable outcome.
Can anyone help me out with this? I'd really like to see some sample codes, if not at least a point in the right direction. Much thanks in advance.
Similarity is not an exact term. For example: if you have a circle and an ellipse, are they similar? They are both round objects, so in this sense they are - but if we want to filter out only circles, they are not. You will have to define a measure (or measures - for example roundness, intensity distribution, size, orientation, number of objects, Euler number, etc.), then calculate it for each image. The similarity of the two images will be (some kind of) distance between the two calculated values. This could be Euclidean distance (for two real-valued measures), or some kind of error function (RMS for intensity distributions).
You will have to choose which transforms your measure should stay invariant under (is the rotated image similar to the original? If yes, a plain Fourier transform is not appropriate).
Measuring the similarity of images is hard; if you have to do that, I would read about image stitching. If you just need to distinguish blobs, first try calculating some simple measures (I recommend calculating moments - area, orientation; read about K-means clustering), or the 1-D Fourier transform of the distance of the contour from the center of mass (which is a little more difficult).
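Since the question already produces a histogram per image, one concrete instance of that "error function" idea is an RMS distance between the two normalized histograms; a small sketch:

    // RMS distance between two normalized histograms of equal length;
    // 0 means identical distributions.
    static double histogramRms(double[] h1, double[] h2) {
        double sum = 0;
        for (int i = 0; i < h1.length; i++) {
            double d = h1[i] - h2[i];
            sum += d * d;
        }
        return Math.sqrt(sum / h1.length);
    }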
Before you attempt to code up a 2DFT, you should fully understand the math behind it. flolo is correct that you can compute it by first doing a 1D FFT on the rows and columns and then combining the results, but I have no reason to believe the L_inf norm is the best way to convert them to a metric, since it completely skips the usual combining step to create the full 2DFT. Take a look at http://fourier.eng.hmc.edu/e101/lectures/Image_Processing/node6.html at the very bottom of the page.
That said, there may be better ways to compare images that don't require comparing 2D arrays of information. For instance, PCA (Principal Component Analysis, which is just a matter of running SVD {Singular Value Decomposition} on your images after mean-centering them, though I'd take a look at the Wikipedia article on it first) will give you a 1D vector which you could then compare directly with some L_p norm, although in this case I would use something like sum(min(a_i/b_i, b_i/a_i))/length(a), where a and b are the 1D vectors you got from the transform.
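That suggested score in code; a sketch that assumes strictly positive components (zero entries would need guarding):

    // Average of min(a_i/b_i, b_i/a_i) over all components;
    // 1.0 means the vectors are identical.
    static double minRatioSimilarity(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += Math.min(a[i] / b[i], b[i] / a[i]);
        }
        return sum / a.length;
    }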
There are many good sites with code for an FFT on a 1-D array of values. You just apply this FFT row by row to your image, and afterwards apply it column-wise to the results.
Then you need a metric on the resulting transformed images; my suggestion would be to try the max-norm (L_inf), that is max_{x,y} |fft2d(img1)[x,y] - fft2d(img2)[x,y]|.
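A self-contained sketch of that rows-then-columns structure, using a textbook recursive radix-2 FFT (so dimensions must be powers of two), followed by the suggested max-norm over complex magnitudes:

    // In-place recursive radix-2 Cooley-Tukey FFT; length must be a power of two.
    static void fft1d(double[] re, double[] im) {
        int n = re.length;
        if (n == 1) return;
        double[] evRe = new double[n / 2], evIm = new double[n / 2];
        double[] odRe = new double[n / 2], odIm = new double[n / 2];
        for (int i = 0; i < n / 2; i++) {
            evRe[i] = re[2 * i];     evIm[i] = im[2 * i];
            odRe[i] = re[2 * i + 1]; odIm[i] = im[2 * i + 1];
        }
        fft1d(evRe, evIm);
        fft1d(odRe, odIm);
        for (int k = 0; k < n / 2; k++) {
            double ang = -2 * Math.PI * k / n;        // twiddle factor e^(-2*pi*i*k/n)
            double wr = Math.cos(ang), wi = Math.sin(ang);
            double tr = wr * odRe[k] - wi * odIm[k];
            double ti = wr * odIm[k] + wi * odRe[k];
            re[k] = evRe[k] + tr;         im[k] = evIm[k] + ti;
            re[k + n / 2] = evRe[k] - tr; im[k + n / 2] = evIm[k] - ti;
        }
    }

    // 2-D FFT as 1-D passes: every row first, then every column.
    static void fft2d(double[][] re, double[][] im) {
        int h = re.length, w = re[0].length;
        for (int y = 0; y < h; y++) fft1d(re[y], im[y]);
        double[] colRe = new double[h], colIm = new double[h];
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) { colRe[y] = re[y][x]; colIm[y] = im[y][x]; }
            fft1d(colRe, colIm);
            for (int y = 0; y < h; y++) { re[y][x] = colRe[y]; im[y][x] = colIm[y]; }
        }
    }

    // Max-norm (L_inf) over the magnitude of the spectral difference.
    static double maxNorm(double[][] re1, double[][] im1,
                          double[][] re2, double[][] im2) {
        double max = 0;
        for (int y = 0; y < re1.length; y++) {
            for (int x = 0; x < re1[0].length; x++) {
                double dr = re1[y][x] - re2[y][x];
                double di = im1[y][x] - im2[y][x];
                max = Math.max(max, Math.sqrt(dr * dr + di * di));
            }
        }
        return max;
    }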
If you just want to check whether one image is likely a quick edit of another (for something like DRM of stock photography), then check the percentages of a normalized color palette within probable regions. If they match within a THRESHOLD for a NUMBER_OF_TEST_COLORS in any one of a number of TEST_REGIONS within the image, then you have a "suspect"... you still need a human to check the suspects. But this is a quick and dirty way to catch many of the image re-sizers, horizontal/vertical flippers, background color changers, file format changers, and other subtle variations... of course, "normalizing the colors" to a quantized palette is an art unto itself. I would recommend quantizing images to the nearest "web safe" colors for practicality.
I'm a blue-collar garbage man in comparison to a mathematician, but garbage men are quite practical! I have had good success with this kind of approach in grouping similar images and in search-by-color applications.
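Snapping to the web-safe palette can be as simple as rounding each channel to the nearest multiple of 51; a tiny sketch:

    // Snap a packed RGB pixel to the nearest "web safe" color: each
    // channel becomes the closest multiple of 51 (0, 51, 102, ..., 255).
    static int toWebSafe(int rgb) {
        int r = Math.round(((rgb >> 16) & 0xFF) / 51f) * 51;
        int g = Math.round(((rgb >> 8) & 0xFF) / 51f) * 51;
        int b = Math.round((rgb & 0xFF) / 51f) * 51;
        return (r << 16) | (g << 8) | b;
    }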
