How to take a group of pixels and combine them into one in Java

I'm trying to create a program that finds images that are similar to each other, and I found a site (http://www.hackerfactor.com/blog/index.php?/archives/432-Looks-Like-It.html) that gives the steps for making a function that creates a fingerprint of an image. The first step is to reduce the image to an 8 by 8 (64 pixel) image, but I can't figure out how to convert a group of pixels into one pixel, e.g.
[(R,G,B)][(R,G,B)][(R,G,B)]
[(R,G,B)][(R,G,B)][(R,G,B)]
[(R,G,B)][(R,G,B)][(R,G,B)]
Take this group of pixels, where each pixel has different R, G and B values. How can I turn them all into one set of values, e.g.
[(R,G,B)]
I thought maybe I could add up all the R, G and B values and then average them, but that seemed too simple. Does anyone know how to do this? I am writing the program in Java.

There are a lot of different interpolation/resampling techniques for downscaling; you can choose one depending on what results you're expecting. A simple one, for example, is nearest-neighbour interpolation, but it won't produce very detailed results, due to its simplicity.
More advanced techniques, e.g. linear, bilinear or bicubic interpolation, are much better suited if the pictures are actual photos (rather than, say, pixel art). But the downscaled image in the link doesn't have much detail left either, so nearest neighbour seems quite sufficient (at least to start with).
public int[] resizePixels(int[] pixels, int w1, int h1, int w2, int h2) {
    int[] temp = new int[w2 * h2];
    double xRatio = w1 / (double) w2;
    double yRatio = h1 / (double) h2;
    for (int i = 0; i < h2; i++) {
        for (int j = 0; j < w2; j++) {
            // nearest source pixel for destination pixel (j, i)
            int px = (int) Math.floor(j * xRatio);
            int py = (int) Math.floor(i * yRatio);
            temp[i * w2 + j] = pixels[py * w1 + px];
        }
    }
    return temp;
}
This Java function takes an array of pixel values (original size w1 × h1) and returns a nearest-neighbour (up/down)-scaled array of pixels with dimensions w2 × h2. See also: here.
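The averaging idea from the question is also valid: it amounts to a box filter, which usually looks smoother than nearest neighbour for an 8×8 fingerprint. A sketch using `BufferedImage`; the class name, method name and block-partitioning scheme are my own choices, not from the linked article:

```java
import java.awt.image.BufferedImage;

public class Downscale {
    /**
     * Averages each block of source pixels into one destination pixel
     * (a simple box filter). Channels are averaged independently.
     */
    public static BufferedImage shrink(BufferedImage src, int w2, int h2) {
        BufferedImage out = new BufferedImage(w2, h2, BufferedImage.TYPE_INT_RGB);
        int w1 = src.getWidth(), h1 = src.getHeight();
        for (int y = 0; y < h2; y++) {
            for (int x = 0; x < w2; x++) {
                // source block covered by destination pixel (x, y)
                int x0 = x * w1 / w2, x1 = Math.max(x0 + 1, (x + 1) * w1 / w2);
                int y0 = y * h1 / h2, y1 = Math.max(y0 + 1, (y + 1) * h1 / h2);
                long r = 0, g = 0, b = 0;
                int count = (x1 - x0) * (y1 - y0);
                for (int sy = y0; sy < y1; sy++) {
                    for (int sx = x0; sx < x1; sx++) {
                        int rgb = src.getRGB(sx, sy);
                        r += (rgb >> 16) & 0xFF;
                        g += (rgb >> 8) & 0xFF;
                        b += rgb & 0xFF;
                    }
                }
                out.setRGB(x, y, (int) (r / count) << 16
                                | (int) (g / count) << 8
                                | (int) (b / count));
            }
        }
        return out;
    }
}
```

For the fingerprint use case, `shrink(img, 8, 8)` gives the 64 averaged pixels the article's first step asks for.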

Related

How to check if 2 images are similar with respect to a reference object?

In my application I want to check whether a captured image contains the same reference object as a previous image.
For example, I capture an image of a pole beside an open plot of land; a few months later, standing in the same position, I capture another image that now shows the pole and a building going up. I want to check whether the images are similar in this respect.
We cannot compare the objects themselves, but another way is to save the image as a drawable and then compare the images' bytes; if the bytes are the same, the images are the same.
Comparing bytes or pixels is generally the only approach that works.
drawable1.bytesEqualTo(drawable2)
drawable1.pixelsEqualTo(drawable2)
bitmap1.bytesEqualTo(bitmap2)
bitmap1.pixelsEqualTo(bitmap2)
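The `bytesEqualTo`/`pixelsEqualTo` calls above come from an extension library rather than the standard API. In plain Java, a pixel-by-pixel comparison of two `BufferedImage`s might look like the following sketch (class and method names are my own):

```java
import java.awt.image.BufferedImage;

public class ImageCompare {
    /** Returns true when both images have identical dimensions and pixel data. */
    public static boolean pixelsEqual(BufferedImage a, BufferedImage b) {
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return false;
        }
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                // getRGB normalizes both images to the same ARGB format
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }
}
```

Note that this tests exact equality only; it will report false for two photos of the same scene taken at different times, which is why histogram or feature-based similarity measures are usually needed for that case.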
Option A: Calculate a simplified histogram (64 bins). Sort your images by histogram similarity.
I used this in an image-processing class. It's a pretty forgiving way to compare images: small changes in lighting or orientation won't break the comparison, and you can even rotate the image by 90° or 180°.
I still have the code; even though it's MATLAB, it may be helpful:
% image to compare:
QueryPicture = imread(strcat('E:\Users\user98\Images\', num2str(floor(result/10)), num2str(mod(result,10)), '.jpg'));
sums = zeros(50,2);
h = zeros(64,50);

% build a simplified histogram:
for n = 1:50
    img = double(imread(strcat('E:\Users\user98\Images\', num2str(floor(n/10)), num2str(mod(n,10)), '.jpg')));
    for i = 1:200
        for j = 1:200
            x1 = bitshift(img(i,j,1), -6);
            x2 = bitshift(img(i,j,2), -6) * 4;
            x3 = bitshift(img(i,j,3), -6) * 16;
            x = x1 + x2 + x3;
            h(x+1, n) = h(x+1, n) + 1;
        end
    end
end

% compare histograms:
for n = 1:50
    tmpVec = zeros(64,1);
    for i = 1:64
        tmpVec(i) = abs(h(i,n) - h(i,result));
    end
    for j = 1:64
        sums(n,1) = sums(n,1) + tmpVec(j);
    end
    sums(n,2) = n;
end
sortedImages = sortrows(sums,1)

% show compare-image:
subplot(2,3,1); image(uint8(QueryPicture));

% show 3 best matches for compare-image:
img1 = double(imread(strcat('E:\Users\user98\Images\', num2str(floor(sortedImages(2,2)/10)), num2str(mod(sortedImages(2,2),10)), '.jpg')));
img2 = double(imread(strcat('E:\Users\user98\Images\', num2str(floor(sortedImages(3,2)/10)), num2str(mod(sortedImages(3,2),10)), '.jpg')));
img3 = double(imread(strcat('E:\Users\user98\Images\', num2str(floor(sortedImages(4,2)/10)), num2str(mod(sortedImages(4,2),10)), '.jpg')));
subplot(2,3,4); image(uint8(img1));
subplot(2,3,5); image(uint8(img2));
subplot(2,3,6); image(uint8(img3));
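The same 64-bin idea translated into Java (the language the rest of this page uses) might look like the sketch below. Quantizing each channel to 2 bits (`>> 6`) mirrors the `bitshift(..., -6)` calls, and the L1 distance mirrors the `abs` sum; class and method names are my own:

```java
import java.awt.image.BufferedImage;

public class TinyHistogram {
    /**
     * Builds a 64-bin histogram: each channel is quantized to 2 bits
     * (value >> 6), giving 4 * 4 * 4 = 64 possible bins.
     */
    public static int[] histogram(BufferedImage img) {
        int[] h = new int[64];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = ((rgb >> 16) & 0xFF) >> 6;
                int g = ((rgb >> 8) & 0xFF) >> 6;
                int b = (rgb & 0xFF) >> 6;
                h[r + 4 * g + 16 * b]++;
            }
        }
        return h;
    }

    /** L1 distance between two histograms; smaller means more similar. */
    public static int distance(int[] a, int[] b) {
        int sum = 0;
        for (int i = 0; i < 64; i++) {
            sum += Math.abs(a[i] - b[i]);
        }
        return sum;
    }
}
```

To rank a set of images against a query, compute `distance(histogram(query), histogram(candidate))` for each candidate and sort ascending, as the MATLAB code does with `sortrows`.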
Option B:
Use the GPS data from the image (EXIF), if available.
Option C:
Google Lens is an app that can classify images and detect objects.

How does one correctly apply filters to an image array?

My question is not about which operators I need to manipulate matrices, but rather about what is actually being sought by this procedure.
I have, for example, an image in matrix form on which I need to perform several operations (this filter is one of them). After converting the image to grayscale, I need to apply the following filter
float[][] smoothKernel = {
    {0.1f, 0.1f, 0.1f},
    {0.1f, 0.2f, 0.1f},
    {0.1f, 0.1f, 0.1f}
};
on it.
The assignment file gives this example, so I assumed that when asked to "smooth" the image, I had to replace every individual pixel with a weighted average of its neighbors (while also making sure special cases such as corners and sides were handled properly).
The basic idea is this:
public static float[][] filter(float[][] gray, float[][] kernel) {
    // gray is the image matrix, and kernel is the array specified above
    float current = 0.0f;
    float around = 0.0f;
    float[][] smooth = new float[gray.length][gray[0].length];
    // the outer two loops visit every pixel
    for (int col = 0; col < gray.length; col++) {
        for (int row = 0; row < gray[0].length; row++) {
            // the inner two loops visit the 3x3 neighbourhood of that pixel
            for (int i = -1; i < 2; i++) {
                for (int j = -1; j < 2; j++) {
                    // at() fetches the neighbouring pixel, handling borders
                    around = at(gray, i + col, j + row);
                    current += around * kernel[i + 1][j + 1];
                }
            }
            // write the weighted sum into the new matrix and reset the accumulator
            smooth[col][row] = current;
            current = 0.0f;
        }
    }
    return smooth;
}
My dilemma is whether I have to create this new array, float[][] smooth, so as to avoid overwriting the values of the original (when I filter in place, the output image is all white...). From the end product in the example I linked above, I just cannot tell what is going on.
What is the correct way of applying the filter? Is this a universal method or does it vary for different filters?
Thank you for taking the time to clarify this.
EDIT: I have found the two errors detailed in the comments below, worked them back into the code, and everything is working fine now.
I have also been able to verify that some of the values in the example are calculated incorrectly (which contributed to my confusion), so I will be sure to point that out in my next class.
The question has been solved by other means; I am not deleting it, however, in the hope that other people can benefit from it. The original code can be found in the edits.
A more advanced colleague of mine helped me see that I was missing two things. One was resetting the current variable after computing each "smoothed" value for the new array: without the reset, the accumulated value keeps growing past the color limit, so every pixel is clamped to the maximum and the image comes out white. The second issue was that I was iterating over the same pixel continuously, because I was iterating over the new array, which caused the whole image to have one color. With those two fixes in place, everything works fine.
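The `at` method is referenced but never shown. One common way to handle the corner and side cases it is responsible for is to clamp out-of-range coordinates to the nearest border pixel; this is only an assumption, since the assignment might instead mirror or zero-pad:

```java
public class Filters {
    /**
     * Returns gray[col][row], clamping coordinates to the image border
     * so the kernel can be applied at edges and corners.
     */
    public static float at(float[][] gray, int col, int row) {
        int c = Math.max(0, Math.min(col, gray.length - 1));
        int r = Math.max(0, Math.min(row, gray[0].length - 1));
        return gray[c][r];
    }
}
```

With clamping, border pixels effectively reuse their edge neighbours, so the kernel weights still sum to 1 and the image does not darken at the edges the way zero-padding would make it.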

Step through image pixels to get values

So I am looking to calculate the value of each pixel in an image. I will read in an image, split it into a number of windows, and then move through each pixel within each window and calculate its value.
Does anyone have ideas on how I would go about this, or any useful links that might help?
I have the following outline in pseudocode, if that makes it clearer:
int w = 8;
int n = 256;
int nblock = n / w;
double s = 0;
for (int i = 0; i < nblock; i++) {
    for (int j = 0; j < nblock; j++) {
        // put the gray values of block (i,j) into an array for both images (one array per image)
        // compute the mean, standard deviation and covariance of those arrays
        // use the structural similarity equation to get a value v: en.wikipedia.org/wiki/Structural_similarity
        s += v;
    }
}
s = s / (nblock * nblock);
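Assuming each block's gray values have already been copied into a `double[]`, the statistics mentioned in the comments could be computed as below. This is a sketch only; the SSIM formula itself, with its stabilizing constants, is on the linked Wikipedia page, and the class name is my own:

```java
public class BlockStats {
    /** Mean of the values in a block. */
    public static double mean(double[] a) {
        double s = 0;
        for (double v : a) s += v;
        return s / a.length;
    }

    /** Population variance of the values in a block (sd is its square root). */
    public static double variance(double[] a) {
        double m = mean(a), s = 0;
        for (double v : a) s += (v - m) * (v - m);
        return s / a.length;
    }

    /** Covariance between two blocks of equal length. */
    public static double covariance(double[] a, double[] b) {
        double ma = mean(a), mb = mean(b), s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - ma) * (b[i] - mb);
        return s / a.length;
    }
}
```

These three quantities (per block, for both images) are exactly what the SSIM equation consumes.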

Java Recursion Triangle with Deviation

Hello, I am fairly new to programming. I am trying, in Java, to create a function that draws recursive triangles from a larger triangle's corner-to-corner midpoints, where each new triangle's points are deviated from their normal positions in the y-value. See the pictures below for a visualization.
The first picture shows the progression of the recursive algorithm without any deviation (orders 0, 1, 2), and the second picture shows it with deviation (orders 0, 1).
I have managed to produce a working piece of code that creates just what I want for the first couple of orders, but when we reach order 2 and above I run into the problem that the smaller triangles don't use the same midpoints, so the result looks like the picture below.
So I need help with a way to store and look up the correct midpoints for each of the triangles. I have been thinking of implementing a new class that calculates and stores the midpoints, but as I said, I need help with this.
Below is my current code
The Point class stores an x and y value for a point.
lineBetween draws a line between the two given points.
void fractalLine(TurtleGraphics turtle, int order, Point ett, Point tva, Point tre, int dev) {
    if (order == 0) {
        lineBetween(ett, tva, turtle);
        lineBetween(tva, tre, turtle);
        lineBetween(tre, ett, turtle);
    } else {
        double deltaX = tva.getX() - ett.getX();
        double deltaY = tva.getY() - ett.getY();
        double deltaXtre = tre.getX() - ett.getX();
        double deltaYtre = tre.getY() - ett.getY();
        double deltaXtva = tva.getX() - tre.getX();
        double deltaYtva = tva.getY() - tre.getY();

        double xt = (deltaX / 2) + ett.getX();
        double yt = (deltaY / 2) + ett.getY() + RandomUtilities.randFunc(dev);
        Point one = new Point(xt, yt);

        xt = (deltaXtre / 2) + ett.getX();
        yt = (deltaYtre / 2) + ett.getY() + RandomUtilities.randFunc(dev);
        Point two = new Point(xt, yt);

        xt = (deltaXtva / 2) + tre.getX();
        yt = (deltaYtva / 2) + tre.getY() + RandomUtilities.randFunc(dev);
        Point three = new Point(xt, yt);

        fractalLine(turtle, order - 1, one, tva, three, dev / 2);
        fractalLine(turtle, order - 1, ett, one, two, dev / 2);
        fractalLine(turtle, order - 1, two, three, tre, dev / 2);
        fractalLine(turtle, order - 1, one, two, three, dev / 2);
    }
}
Thanks in Advance
Victor
You can define a triangle by 3 points (vertices), so the vertices a, b and c form a triangle, and the combinations ab, ac and bc are its edges. The algorithm goes:
First, start with the three vertices a, b and c.
Get the midpoints of the 3 edges, p1, p2 and p3; these give the 4 sets of vertices for the 4 smaller triangles, i.e. (a,p1,p2), (b,p1,p3), (c,p2,p3) and (p1,p2,p3).
Recursively find the sub-triangles of those 4 triangles until the desired depth is reached.
So, as a rough guide, the code goes:
findTriangles(Vertexes[] triangle, int currentDepth) {
    // depth is reached
    if (currentDepth == depth) {
        store(triangle);
        return;
    }
    Vertexes[] first = getFirstTriangle(triangle);
    Vertexes[] second = getSecondTriangle(triangle);
    Vertexes[] third = getThirdTriangle(triangle);
    Vertexes[] fourth = getFourthTriangle(triangle);
    findTriangles(first, currentDepth + 1);
    findTriangles(second, currentDepth + 1);
    findTriangles(third, currentDepth + 1);
    findTriangles(fourth, currentDepth + 1);
}
You have to store the relevant triangles in a data structure.
You compute the midpoint of each edge again and again in the different paths of your recursion. As long as you do not change the midpoints randomly, you get the same midpoint in every path, so there is no problem.
But of course, if you modify the midpoints randomly, you will end up with two different midpoints in two different paths of the recursion.
You could modify your algorithm so that you pass along not only the 3 corners of the triangle, but also the modified midpoint of each edge. Or you keep the midpoints in a separate list or map, compute each one only once, and look it up otherwise.
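One way to implement the "separate map" suggestion: cache each displaced midpoint under a key built from the unordered pair of endpoints, so the two triangles sharing an edge always get the same midpoint. The class below is a sketch; the question's Point class is replaced by plain double[] pairs, and the random displacement stands in for RandomUtilities.randFunc:

```java
import java.util.HashMap;
import java.util.Map;

public class MidpointCache {
    private final Map<String, double[]> cache = new HashMap<>();

    /**
     * Returns the (randomly displaced) midpoint of the segment (x1,y1)-(x2,y2),
     * computing it once and reusing it for every later request on the same edge.
     */
    public double[] midpoint(double x1, double y1, double x2, double y2, double dev) {
        // order the endpoints so (a,b) and (b,a) map to the same key
        String key = (x1 < x2 || (x1 == x2 && y1 <= y2))
                ? x1 + "," + y1 + ":" + x2 + "," + y2
                : x2 + "," + y2 + ":" + x1 + "," + y1;
        return cache.computeIfAbsent(key, k -> new double[] {
                (x1 + x2) / 2,
                (y1 + y2) / 2 + (Math.random() - 0.5) * dev  // stand-in for randFunc(dev)
        });
    }
}
```

In fractalLine you would then replace the three manual midpoint computations with calls to this cache, passing the same MidpointCache instance down through the recursion.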

What does this mysterious Color method do? What does it return?

Maybe I've had too much coffee, maybe I've been working too long; regardless, I'm at a loss as to what this method does, or rather, why and how it does it. Could anyone shed some light on it? What is the nextColor?
public Color nextColor() {
    int max = 0, min = 1000000000, cr = 0, cg = 0, cb = 0;
    // candidates: every 4th value per channel, medium overall brightness only
    for (int r = 0; r < 256; r += 4) {
        for (int g = 0; g < 256; g += 4) {
            for (int b = 0; b < 256; b += 4) {
                if (r + g + b < 256 || r + g + b > 512) {
                    continue;
                }
                // min = squared distance from (r,g,b) to its nearest color in colorTable
                min = 1000000000;
                for (Color c : colorTable) {
                    int dred = r - c.getRed();
                    int dgreen = g - c.getGreen();
                    int dblue = b - c.getBlue();
                    int dif = dred * dred + dgreen * dgreen + dblue * dblue;
                    if (min > dif) {
                        min = dif;
                    }
                }
                // keep the candidate whose nearest color is farthest away
                if (max < min) {
                    max = min;
                    cr = r;
                    cg = g;
                    cb = b;
                }
            }
        }
    }
    return new Color(cr, cg, cb, 0x90);
}
UPDATE
Thanks for the responses everyone. Looking at the context of the method within the program it is clear that their intent was indeed to return a new Color that is "furthest away" from the set of existing Colors.
Thanks Sparr for posing the followup to this question, I will definitely rewrite the above with your advice in mind.
I am not very well versed in the RGB color scale. Knowing that the intention of the above method is to retrieve a "complementary?" color to the existing set of colors, will the solution provided in answer 1 actually be complementary in the sense of how we perceive color? Is there a simpler way to choose a color that complements the set, or does the numerical analysis of the RGB components actually yield the appropriate color?
It seems like you have colorTable, which stores a list of colors.
Then you have this strangely hardcoded colorspace of
colors whose components are a multiple of 4 and that are "not too bright" but not "too dark" either.
This function seems to give you the color in the latter that "contrasts" best with your color table.
When I say contrast, it is defined by choosing the color that is as far as possible from the color table, using the 2-norm.
Given a global array of Color objects named colorTable, this function finds, for each candidate color in the following colorspace, its closest* entry in that array, and then returns the candidate whose closest entry is farthest away:
Red, Green and Blue components a multiple of 4
Red+Green+Blue between 256 and 512
*: "closest" is defined as the lowest sum of squared differences over the color components.
As Paul determined, this seems like a plausible, if insanely inefficiently implemented, naive approach to finding a single color that provides a high contrast with the contents of colorTable. The same result could be found with a single pass through colorTable and a bit more math, instead of some 5 million passes through colorTable, and there are much better ways to find a different color that provides a much higher average contrast.
Consider the case where the pseudo-solid defined by the points in the colorTable has a large "hollow" in its interior, such that nextColor selects the point in the center of that hollow as the nextColor. Depending on what you know about the colorTable, this case could be exceedingly rare. If it is predicted to be rare enough, and you are willing to accept a less than optimal (assuming we take nextColor's output to be optimal) solution in those cases, then a significant optimization presents itself.
In all cases except the above-described one, the color selected by nextColor will be somewhere on the surface of the minimal convex hull enclosing all of the points in the 1/64-dense colorspace defined by your loops. Generating the list of points on that surface is slightly more computationally complex than the simple loops that generate the list of all the points, but it would reduce your search space by about a factor of 25.
In the vast majority of cases, the result of that simplified search will be a point on one of the corners of that convex hull. Considering only those reduces your search space to a trivial list (24 candidates, if my mental geometry serves me well) that could simply be stored ahead of time.
If the nextColor selected from those is "too close" to your colorTable, then you could fall back on running the original type of search in hopes of finding the sort of "hollow" mentioned above. The density of that search could be adapted based on how close the first pass got, and narrowed down from there. That is, if the super fast search finds a nextColor 8 units away from its nearest neighbor in colorTable, then to do better than that you would have to find a hollow at least 16 units across within the colorTable. Run the original search with a step of 8 and store any candidates more than 4 units distant (the hollow is not likely to be aligned with your search grid), then center a radius-12 search of higher density on each of those candidates.
It occurs to me that the 1/64-dense nature (all the multiples of 4) of your search space was probably instituted by the original author for the purpose of speeding up the search in the first place. Given these improvements, you do away with that compromise.
All of this presumes that you want to stick with improvements on this naive method of finding a contrasting color. There are certainly better ways, given equal or more (which colors in colorTable are the most prevalent in your usage? what colors appear more contrast-y to the human eye?) information.
It's trying to get you another color for:
a) false-color coding a data set, or
b) drawing another line on a graph.
