I have two methods that should write, for each position, the absolute difference between a template and a same-size patch of the original picture into the pixel where both the patch and template coordinates are (0,0).
Here are my two methods:
private void normalizeAndDraw(double biggest, double[] temporaryPixels, int[] dstPixels) {
    double normalize = 255 / biggest;
    for (int c = 0; c < temporaryPixels.length; c++) {
        int value = (int) (temporaryPixels[c] * normalize);
        dstPixels[c] = 0xFF000000 | (value << 16) | (value << 8) | value;
    }
}
private void getAbsolutePicture(int srcPixels[], int srcWidth, int srcHeight, int dstPixels[], int dstWidth, int dstHeight, int templatePixels[], int tmpHeight, int tmpWidth) {
    double temporaryPixels[] = new double[dstHeight * dstWidth];
    double biggest = 0;
    double sumR = 0;
    for (int j = 0; j < tmpHeight; j++) {
        for (int i = 0; i < tmpWidth; i++) {
            int posTmp = j * tmpWidth + i;
            sumR += templatePixels[posTmp] & 0xFF;
        }
    }
    for (int y = 0; y < dstHeight; y++) {
        for (int x = 0; x < dstWidth; x++) {
            double sumI = 0;
            for (int j = 0; j < tmpHeight; j++) {
                for (int i = 0; i < tmpWidth; i++) {
                    int pos = (y + j) * dstWidth + (x + i);
                    sumI += srcPixels[pos] & 0xFF;
                }
            }
            double absDifference = Math.abs(sumI - sumR);
            biggest = Math.max(absDifference, biggest);
            temporaryPixels[y * dstWidth + x] = absDifference;
        }
    }
    normalizeAndDraw(biggest, temporaryPixels, dstPixels);
}
They are called like this:
getAbsolutePicture(srcPixels, srcWidth, srcHeight, dstPixels, dstWidth, dstHeight, templatePixels, templateWidth, templateHeight);
If values are written into the dstPixels array they will automatically be displayed.
Unfortunately, instead of the correct solution, which looks like this:
http://i.stack.imgur.com/cqlD3.png
I get a result that looks like this:
http://i.stack.imgur.com/2Cjhz.png
I am pretty sure that my error lies in the calculation of sumR and sumI, but I just can't figure it out.
What exactly is wrong in my code?
My code was actually okay. The biggest problem was that when I called getAbsolutePicture() I mixed up tmpWidth and tmpHeight: the signature expects the template height before the width, but I passed templateWidth, templateHeight.
Related
I have implemented a convolution filter in Java. I did this a while back in AP CS, but now I actually need it for something, so I re-implemented it to make sure I still know how to do it. Unfortunately I lost my working copy, so I can't compare the current code to my previous working code. I am pretty sure I am implementing the algorithm correctly, but the code is still not working properly. Can an experienced programmer please explain what I am doing wrong?
Here is the Convolution class:
import java.awt.*;
import java.util.Arrays;

public class ConvolutionFilter {
    private int[][] image;
    private int[][] weights;
    private double[][] doubleWeights;
    private int[][] convolved;

    public ConvolutionFilter(int[][] image, int[][] weights) {
        this.image = image;
        this.weights = weights;
        convolve();
    }

    public void convolve() {
        int sum;
        int[][] convolved = new int[image.length][image[0].length];
        for (int r = 0; r < convolved.length - weights.length - 1; r++) {
            for (int c = 0; c < convolved[r].length - weights.length - 1; c++) {
                sum = 0;
                for (int i = 0; i < weights.length; i++) {
                    for (int j = 0; j < weights[i].length; j++) {
                        sum += image[r + i][c + j] * weights[i][j];
                    }
                }
                convolved[r][c] = sum / weight();
            }
        }
        this.convolved = convolved;
    }

    public int numWeights() {
        return weights.length * weights[0].length;
    }

    public int weight() {
        int sum = 0;
        for (int r = 0; r < weights.length; r++) {
            for (int c = 0; c < weights[r].length; c++) {
                sum += weights[r][c];
            }
        }
        if (sum == 0) return 1; else return sum;
    }

    public int[][] getConvolved() {
        return convolved;
    }
}
Any help is appreciated!
To adapt this to RGB, the arithmetic should be done per channel instead of on the packed representation, for example (not tested):
public void convolve() {
    int[][] convolved = new int[image.length][image[0].length];
    double invScale = 1.0 / weight();
    for (int r = 0; r < convolved.length - weights.length - 1; r++) {
        for (int c = 0; c < convolved[r].length - weights.length - 1; c++) {
            int rsum = 0, gsum = 0, bsum = 0;
            for (int i = 0; i < weights.length; i++) {
                for (int j = 0; j < weights[i].length; j++) {
                    int pixel = image[r + i][c + j];
                    int w = weights[i][j];
                    rsum += ((pixel >> 16) & 0xFF) * w;
                    gsum += ((pixel >> 8) & 0xFF) * w;
                    bsum += (pixel & 0xFF) * w;
                }
            }
            rsum = (int) (rsum * invScale);
            gsum = (int) (gsum * invScale);
            bsum = (int) (bsum * invScale);
            convolved[r][c] = bsum | (gsum << 8) | (rsum << 16) | (0xFF << 24);
        }
    }
    this.convolved = convolved;
}
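One caveat worth adding: with a kernel like an edge detector, the per-channel sums can fall outside 0–255, and the shift-and-OR repacking will then bleed into neighboring channels. A small clamping helper (my own naming, not from the code above) avoids that:

```java
public class ChannelClamp {
    // Clamp a channel sum into the displayable 0..255 range.
    static int clamp(int v) {
        return Math.max(0, Math.min(255, v));
    }

    // Repack clamped channels into an ARGB int.
    static int pack(int r, int g, int b) {
        return 0xFF000000 | (clamp(r) << 16) | (clamp(g) << 8) | clamp(b);
    }

    public static void main(String[] args) {
        // 300 saturates to 255, -5 to 0: prints ffff0080
        System.out.println(Integer.toHexString(pack(300, -5, 128)));
    }
}
```

Calling clamp on each of rsum, gsum, and bsum before the final OR in the loop above would be enough.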
I use the CONRAD open-source framework and I want to compute the histogram of an image. First I found the min and max of my image values, the range = max - min, and then the lower limit and upper bound exactly according to this link: https://www2.southeastern.edu/Academics/Faculty/dgurney/Math241/StatTopics/HistGen.htm
Now my problem is: how can the program divide the range according to the bin width, and how can I assign each image value to a bin (class)?
public static int[] computeHistogram2D(Grid2D grid, int bins, int width, int height) {
    float val = 0;
    float max = -Float.MAX_VALUE;
    float min = Float.MAX_VALUE;
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            val = grid.getAtIndex(i, j);
            if (val > max) {
                max = val;
            }
            if (val < min) {
                min = val;
            }
        }
    }
    int[] histo = new int[bins];
    double[] histof = new double[bins];
    float range = max - min;
    float lowerLimit = range / (float) bins;
    float upperBound = range / (float) (bins - 1);
    //float binWidth = (upperBound - lowerLimit) / bins;
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            val = grid.getAtIndex(i, j);
            int b = (int) ((val - min) / upperBound);
            histo[b]++;
            histof[b]++;
            System.out.println(" intervals " + b);
        }
    }
    System.out.println("maxValgrid2D: " + max + " minValGrid2D: " + min + " binWidth " + upperBound);
    VisualizationUtil.createPlot(histof, "Histogram2D", "intensity", "count").show();
    return histo;
}
This is what I got as a result, which seems to be wrong because the max value of my picture is 21.12.
This is what I expect to get:
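For what it's worth, the usual binning rule is binWidth = (max - min) / bins and binIndex = (val - min) / binWidth, with the maximum value clamped into the last bin. A minimal sketch, using a plain float[][] in place of CONRAD's Grid2D:

```java
public class HistogramSketch {
    // Bin the values into `bins` equal-width classes between min and max.
    static int[] histogram(float[][] data, int bins) {
        float min = Float.MAX_VALUE, max = -Float.MAX_VALUE;
        for (float[] row : data)
            for (float v : row) {
                if (v < min) min = v;
                if (v > max) max = v;
            }
        float binWidth = (max - min) / bins;
        int[] histo = new int[bins];
        for (float[] row : data)
            for (float v : row) {
                int b = (int) ((v - min) / binWidth);
                if (b == bins) b = bins - 1; // the max value lands in the last bin
                histo[b]++;
            }
        return histo;
    }

    public static void main(String[] args) {
        // prints [2, 2]: 0 and 1 fall in [0, 1.5), 2 and 3 in [1.5, 3]
        System.out.println(java.util.Arrays.toString(
                histogram(new float[][]{{0f, 1f}, {2f, 3f}}, 2)));
    }
}
```

In the posted code, dividing by `range / (bins - 1)` instead of `range / bins` shifts every value into too low a bin index, which would explain the unexpected plot.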
I was wondering how to write this OpenCV C++ code in Java:
uchar *ptr = eye.ptr<uchar>(y);
I have been looking around and I think I can use a byte in place of the uchar... but I have no idea how to get the equivalent of .ptr in Java.
Here's my code so far:
private Rect getEyeball(Mat eye, MatOfRect circles) {
    int[] sums = new int[circles.toArray().length];
    for (int y = 0; y < eye.rows(); y++) {
        // OpenCV method uchar *ptr = eye.ptr<uchar>(y); goes here
    }
    int smallestSum = 9999999;
    int smallestSumIndex = -1;
    for (int i = 0; i < circles.toArray().length; i++) {
        if (sums[i] < smallestSum) {
            smallestSum = sums[i];
            smallestSumIndex = i;
        }
    }
    return circles.toArray()[smallestSumIndex];
}
The full C++ code is
cv::Vec3f getEyeball(cv::Mat &eye, std::vector<cv::Vec3f> &circles)
{
    std::vector<int> sums(circles.size(), 0);
    for (int y = 0; y < eye.rows; y++)
    {
        uchar *ptr = eye.ptr<uchar>(y);
        for (int x = 0; x < eye.cols; x++)
        {
            int value = static_cast<int>(*ptr);
            for (int i = 0; i < circles.size(); i++)
            {
                cv::Point center((int)std::round(circles[i][0]), (int)std::round(circles[i][1]));
                int radius = (int)std::round(circles[i][2]);
                if (std::pow(x - center.x, 2) + std::pow(y - center.y, 2) < std::pow(radius, 2))
                {
                    sums[i] += value;
                }
            }
            ++ptr;
        }
    }
    int smallestSum = 9999999;
    int smallestSumIndex = -1;
    for (int i = 0; i < circles.size(); i++)
    {
        if (sums[i] < smallestSum)
        {
            smallestSum = sums[i];
            smallestSumIndex = i;
        }
    }
    return circles[smallestSumIndex];
}
Distilling down your C++:
for (int y = 0; y < eye.rows; y++)
{
    uchar *ptr = eye.ptr<uchar>(y);
    for (int x = 0; x < eye.cols; x++)
    {
        int value = static_cast<int>(*ptr);
        // A loop not using ptr.
        ++ptr;
    }
}
You're simply getting the pixel value at (x,y) from eye.
So, just use one of the overloads of Mat.get.
byte[] values = new byte[eye.channels()];
for (int y = 0; y < eye.rows(); y++) {
    for (int x = 0; x < eye.cols(); x++) {
        eye.get(y, x, values); // Mat.get takes (row, col); a CV_8U Mat wants a byte[]
        int value = values[0] & 0xFF; // mask to recover the unsigned uchar value
        // A loop not using ptr.
    }
}
Note that using get(int, int, byte[]) rather than get(int, int) here means that you avoid allocating a new array for each pixel, which will make things a heck of a lot faster.
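To make the inner logic concrete without the OpenCV types, the per-circle accumulation can be sketched over a flattened grayscale array (the names and the {cx, cy, r} layout here are my own, not OpenCV's):

```java
public class EyeballSums {
    // Sum the pixel values that fall inside each circle {cx, cy, r}.
    static long[] circleSums(int[] gray, int width, int height, double[][] circles) {
        long[] sums = new long[circles.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int value = gray[y * width + x];
                for (int i = 0; i < circles.length; i++) {
                    double dx = x - circles[i][0], dy = y - circles[i][1];
                    // Inside the circle when dist^2 < r^2, as in the C++ version.
                    if (dx * dx + dy * dy < circles[i][2] * circles[i][2]) {
                        sums[i] += value;
                    }
                }
            }
        }
        return sums;
    }

    public static void main(String[] args) {
        int[] gray = new int[9];
        java.util.Arrays.fill(gray, 1);
        // A radius-1.1 circle at (1,1) in a 3x3 image covers the center
        // and its 4 direct neighbors: prints [5]
        System.out.println(java.util.Arrays.toString(
                circleSums(gray, 3, 3, new double[][]{{1, 1, 1.1}})));
    }
}
```

Filling `gray` from the Mat once up front (with a single bulk `eye.get(0, 0, data)` into a byte[] and masking) would also avoid the per-pixel get calls entirely.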
I have this problem: I have an NxM matrix and I want to multiply it by a 3x3 matrix, just as in a convolution.
example in this link
This is the code of the matrices:
int width = img.getWidth();
int height = img.getHeight();
int matrix[][] = new int[width][height];
int edgeMatrix[][] = {
    {-1, -1, -1},
    {-1,  8, -1},
    {-1, -1, -1}
};
This is the code of the cycle:
for (int x = 0; x < width; x++) {
    w = 0;
    holderX = x;
    for (w = 0; w < 3; w++) {
        v = 0;
        if (w > 0)
            x++;
        for (v = 0; v < 3; v++) {
            sum = sum + matrix[v][x] * edgeMatrix[v][w];
            if (w == 2 && v == 2)
                x = holderX;
        }
    }
}
This cycle already multiplies the first "row" of 3 of the matrix.
I tried different ways to achieve this, but I just can't get it so that when the cycle reaches the end of the width, the row index automatically increases by one and the column starts over, while the inner matrix multiplication keeps working.
Thanks for the help.
You don't need holderX, but you do need one more loop.
int width = img.getWidth();
int height = img.getHeight();
int input[][] = img.getPixels(); // or whatever API you use
int output[][] = new int[height][width];
int kernel[][] = {
    {-1, -1, -1},
    {-1,  8, -1},
    {-1, -1, -1}
};
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int accumulator = 0;
        for (int v = 0; v < 3; v++) {
            for (int w = 0; w < 3; w++) {
                int sy = y + v - 1;
                int sx = x + w - 1;
                if (sy >= 0 && sy < height && sx >= 0 && sx < width) {
                    accumulator += input[sy][sx] * kernel[v][w];
                }
            }
        }
        output[y][x] = accumulator;
    }
}
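As a quick sanity check, that edge kernel should produce zero in the interior of a constant image (the center pixel times 8 minus its 8 identical neighbors). A standalone version of the same loop, for experimenting:

```java
public class ConvolveCheck {
    // The same zero-padded 3x3 convolution, as a standalone method.
    static int[][] convolve(int[][] input, int[][] kernel) {
        int height = input.length, width = input[0].length;
        int[][] output = new int[height][width];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int acc = 0;
                for (int v = 0; v < 3; v++) {
                    for (int w = 0; w < 3; w++) {
                        int sy = y + v - 1;
                        int sx = x + w - 1;
                        if (sy >= 0 && sy < height && sx >= 0 && sx < width) {
                            acc += input[sy][sx] * kernel[v][w];
                        }
                    }
                }
                output[y][x] = acc;
            }
        }
        return output;
    }

    public static void main(String[] args) {
        int[][] flat = {{5, 5, 5}, {5, 5, 5}, {5, 5, 5}};
        int[][] edge = {{-1, -1, -1}, {-1, 8, -1}, {-1, -1, -1}};
        // Interior of a constant image: 8*5 - 8*5 = 0
        System.out.println(convolve(flat, edge)[1][1]);
    }
}
```

At the borders the missing neighbors are treated as zero, so edges of a constant image come out non-zero; that's a property of zero padding, not a bug.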
I have a matrix which represents an image, and I need to cycle over each pixel and, for each one, compute the sum of all its neighbors, i.e. the pixels that belong to a window of radius rad centered on the pixel.
I came up with three alternatives:
The simplest way, which recomputes the window for each pixel
A more optimized way that uses a queue to store the sums of the window columns and, while cycling through the columns of the matrix, updates this queue by adding a new element and removing the oldest
An even more optimized way that does not recompute the queue for each row but incrementally adjusts a previously saved one
I implemented them in C++, using a queue for the second method and a combination of deques for the third (I need to iterate through their elements without destroying them), and timed them to see if there was an actual improvement. It appears that the third method is indeed faster.
Then I tried to port the code to Java (and I must admit that I'm not very comfortable with it). I used an ArrayDeque for the second method and LinkedLists for the third, and the third turned out to be inefficient.
Here is the simplest method in C++ (I'm not posting the Java version since it is almost identical):
void normalWindowing(int mat[][MAX], int cols, int rows, int rad) {
    int i, j;
    int h = 0;
    for (i = 0; i < rows; ++i)
    {
        for (j = 0; j < cols; j++)
        {
            h = 0;
            for (int ry = -rad; ry <= rad; ry++)
            {
                int y = i + ry;
                if (y >= 0 && y < rows)
                {
                    for (int rx = -rad; rx <= rad; rx++)
                    {
                        int x = j + rx;
                        if (x >= 0 && x < cols)
                        {
                            h += mat[y][x];
                        }
                    }
                }
            }
        }
    }
}
Here is the second method (the one optimized through columns) in C++:
void opt1Windowing(int mat[][MAX], int cols, int rows, int rad){
int i, j, h, y, col;
queue<int>* q = NULL;
for (i = 0; i < rows; ++i)
{
if (q != NULL)
delete(q);
q = new queue<int>();
h = 0;
for (int rx = 0; rx <= rad; rx++)
{
if (rx < cols)
{
int mem = 0;
for (int ry =- rad; ry <= rad; ry++)
{
y = i + ry;
if (y >= 0 && y < rows)
{
mem += mat[y][rx];
}
}
q->push(mem);
h += mem;
}
}
for (j = 1; j < cols; j++)
{
col = j + rad;
if (j - rad > 0)
{
h -= q->front();
q->pop();
}
if (j + rad < cols)
{
int mem = 0;
for (int ry =- rad; ry <= rad; ry++)
{
y = i + ry;
if (y >= 0 && y < rows)
{
mem += mat[y][col];
}
}
q->push(mem);
h += mem;
}
}
}
}
And here is the Java version:
public static void opt1Windowing(int [][] mat, int rad){
int i, j = 0, h, y, col;
int cols = mat[0].length;
int rows = mat.length;
ArrayDeque<Integer> q = null;
for (i = 0; i < rows; ++i)
{
q = new ArrayDeque<Integer>();
h = 0;
for (int rx = 0; rx <= rad; rx++)
{
if (rx < cols)
{
int mem = 0;
for (int ry =- rad; ry <= rad; ry++)
{
y = i + ry;
if (y >= 0 && y < rows)
{
mem += mat[y][rx];
}
}
q.addLast(mem);
h += mem;
}
}
j = 0;
for (j = 1; j < cols; j++)
{
col = j + rad;
if (j - rad > 0)
{
h -= q.peekFirst();
q.pop();
}
if (j + rad < cols)
{
int mem = 0;
for (int ry =- rad; ry <= rad; ry++)
{
y = i + ry;
if (y >= 0 && y < rows)
{
mem += mat[y][col];
}
}
q.addLast(mem);
h += mem;
}
}
}
}
I recognize this post will be a wall of text. Here is the third method in C++:
void opt2Windowing(int mat[][MAX], int cols, int rows, int rad){
int i = 0;
int j = 0;
int h = 0;
int hh = 0;
deque< deque<int> *> * M = new deque< deque<int> *>();
for (int ry = 0; ry <= rad; ry++)
{
if (ry < rows)
{
deque<int> * q = new deque<int>();
M->push_back(q);
for (int rx = 0; rx <= rad; rx++)
{
if (rx < cols)
{
int val = mat[ry][rx];
q->push_back(val);
h += val;
}
}
}
}
deque<int> * C = new deque<int>(M->front()->size());
deque<int> * Q = new deque<int>(M->front()->size());
deque<int> * R = new deque<int>(M->size());
deque< deque<int> *>::iterator mit;
deque< deque<int> *>::iterator mstart = M->begin();
deque< deque<int> *>::iterator mend = M->end();
deque<int>::iterator rit;
deque<int>::iterator rstart = R->begin();
deque<int>::iterator rend = R->end();
deque<int>::iterator cit;
deque<int>::iterator cstart = C->begin();
deque<int>::iterator cend = C->end();
for (mit = mstart, rit = rstart; mit != mend, rit != rend; ++mit, ++rit)
{
deque<int>::iterator pit;
deque<int>::iterator pstart = (* mit)->begin();
deque<int>::iterator pend = (* mit)->end();
for(cit = cstart, pit = pstart; cit != cend && pit != pend; ++cit, ++pit)
{
(* cit) += (* pit);
(* rit) += (* pit);
}
}
for (i = 0; i < rows; ++i)
{
j = 0;
if (i - rad > 0)
{
deque<int>::iterator cit;
deque<int>::iterator cstart = C->begin();
deque<int>::iterator cend = C->end();
deque<int>::iterator pit;
deque<int>::iterator pstart = (M->front())->begin();
deque<int>::iterator pend = (M->front())->end();
for(cit = cstart, pit = pstart; cit != cend; ++cit, ++pit)
{
(* cit) -= (* pit);
}
deque<int> * k = M->front();
M->pop_front();
delete k;
h -= R->front();
R->pop_front();
}
int row = i + rad;
if (row < rows && i > 0)
{
deque<int> * newQ = new deque<int>();
M->push_back(newQ);
deque<int>::iterator cit;
deque<int>::iterator cstart = C->begin();
deque<int>::iterator cend = C->end();
int rx;
int tot = 0;
for (rx = 0, cit = cstart; rx <= rad; rx++, ++cit)
{
if (rx < cols)
{
int val = mat[row][rx];
newQ->push_back(val);
(* cit) += val;
tot += val;
}
}
R->push_back(tot);
h += tot;
}
hh = h;
copy(C->begin(), C->end(), Q->begin());
for (j = 1; j < cols; j++)
{
int col = j + rad;
if (j - rad > 0)
{
hh -= Q->front();
Q->pop_front();
}
if (j + rad < cols)
{
int val = 0;
for (int ry =- rad; ry <= rad; ry++)
{
int y = i + ry;
if (y >= 0 && y < rows)
{
val += mat[y][col];
}
}
hh += val;
Q->push_back(val);
}
}
}
}
And finally its Java version:
public static void opt2Windowing(int [][] mat, int rad){
int cols = mat[0].length;
int rows = mat.length;
int i = 0;
int j = 0;
int h = 0;
int hh = 0;
LinkedList<LinkedList<Integer>> M = new LinkedList<LinkedList<Integer>>();
for (int ry = 0; ry <= rad; ry++)
{
if (ry < rows)
{
LinkedList<Integer> q = new LinkedList<Integer>();
M.addLast(q);
for (int rx = 0; rx <= rad; rx++)
{
if (rx < cols)
{
int val = mat[ry][rx];
q.addLast(val);
h += val;
}
}
}
}
int firstSize = M.getFirst().size();
int mSize = M.size();
LinkedList<Integer> C = new LinkedList<Integer>();
LinkedList<Integer> Q = null;
LinkedList<Integer> R = new LinkedList<Integer>();
for (int k = 0; k < firstSize; k++)
{
C.add(0);
}
for (int k = 0; k < mSize; k++)
{
R.add(0);
}
ListIterator<LinkedList<Integer>> mit;
ListIterator<Integer> rit;
ListIterator<Integer> cit;
ListIterator<Integer> pit;
for (mit = M.listIterator(), rit = R.listIterator(); mit.hasNext();)
{
Integer r = rit.next();
int rsum = 0;
for (cit = C.listIterator(), pit = (mit.next()).listIterator();
cit.hasNext();)
{
Integer c = cit.next();
Integer p = pit.next();
rsum += p;
cit.set(c + p);
}
rit.set(r + rsum);
}
for (i = 0; i < rows; ++i)
{
j = 0;
if (i - rad > 0)
{
for(cit = C.listIterator(), pit = M.getFirst().listIterator();
cit.hasNext();)
{
Integer c = cit.next();
Integer p = pit.next();
cit.set(c - p);
}
M.removeFirst();
h -= R.getFirst();
R.removeFirst();
}
int row = i + rad;
if (row < rows && i > 0)
{
LinkedList<Integer> newQ = new LinkedList<Integer>();
M.addLast(newQ);
int rx;
int tot = 0;
for (rx = 0, cit = C.listIterator(); rx <= rad; rx++)
{
if (rx < cols)
{
Integer c = cit.next();
int val = mat[row][rx];
newQ.addLast(val);
cit.set(c + val);
tot += val;
}
}
R.addLast(tot);
h += tot;
}
hh = h;
Q = new LinkedList<Integer>();
Q.addAll(C);
for (j = 1; j < cols; j++)
{
int col = j + rad;
if (j - rad > 0)
{
hh -= Q.getFirst();
Q.pop();
}
if (j + rad < cols)
{
int val = 0;
for (int ry =- rad; ry <= rad; ry++)
{
int y = i + ry;
if (y >= 0 && y < rows)
{
val += mat[y][col];
}
}
hh += val;
Q.addLast(val);
}
}
}
}
I guess that most of it is due to the poor choice of LinkedList in Java and to the lack of an efficient (not shallow) copy method between two LinkedLists.
How can I improve the third Java method? Am I making some conceptual error? As always, any criticism is welcome.
UPDATE: Even if it does not solve the issue, using ArrayLists instead of LinkedLists, as was suggested, improves the third method. The second one still performs better (but when the number of rows and columns of the matrix is lower than 300 and the window radius is small, the first, unoptimized method is the fastest in Java).
UPDATE 2: Which tool can I use to profile my code and get a richer understanding of which instructions take the most time? I'm on Mac OS X, and the NetBeans Profiler just shows me that the three methods end up with different times (it seems I'm not able to look inside each method).
UPDATE 3: I'm measuring the times in Java using System.nanoTime(); can this lead to inaccurate estimates?
long start, end;
start = System.nanoTime();
simpleWindowing(mat, rad);
end = System.nanoTime();
System.out.println(end-start);
start = System.nanoTime();
opt1Windowing(mat, rad);
end = System.nanoTime();
System.out.println(end-start);
start = System.nanoTime();
opt2Windowing(mat, rad);
end = System.nanoTime();
System.out.println(end-start);
LinkedList is a very bad choice for a list where you do random access.
Each get(int) scans the list until the requested index is reached.
get(1) is quite fast, but get(100) is 100 times slower and get(1000) is 1000 times slower than get(1).
You should change that to use ArrayList instead, and initialize the ArrayList with the expected size to avoid unnecessary resizing of the internal array.
Edit
While my comments about get() and LinkedList are correct, they do not apply in this context. I somehow overlooked that there is no random access to the list.
Use an int[] instead of a List.
Lists store Objects, requiring a conversion from int to Integer and back.
I indeed implemented two optimized versions of that routine:
the first one, as User216237 suggested, uses an array of ints as a queue to cache the summed column values as the algorithm scans the image by columns
the other one implements a summed area table in order to compute every rectangular sum by accessing the table just four times (independent of the window radius)
One technique can be arbitrarily faster than the other depending on the specific domain. In mine, the summed area table had to be recomputed several times, so it turned out slower than the first method for radius values of less than 20 pixels.
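For reference, a summed area table of the kind mentioned above can be sketched with plain arrays (my own minimal version, not the code I actually used):

```java
public class SummedAreaTable {
    // sat[y+1][x+1] = sum of mat[0..y][0..x]; the extra row and column of
    // zeros make the window lookup branch-free.
    static long[][] build(int[][] mat) {
        int rows = mat.length, cols = mat[0].length;
        long[][] sat = new long[rows + 1][cols + 1];
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                sat[y + 1][x + 1] = mat[y][x] + sat[y][x + 1] + sat[y + 1][x] - sat[y][x];
            }
        }
        return sat;
    }

    // Sum of the window of radius rad centered on (y, x), clipped to the image:
    // four table accesses regardless of rad.
    static long windowSum(long[][] sat, int rows, int cols, int y, int x, int rad) {
        int y0 = Math.max(0, y - rad), x0 = Math.max(0, x - rad);
        int y1 = Math.min(rows - 1, y + rad), x1 = Math.min(cols - 1, x + rad);
        return sat[y1 + 1][x1 + 1] - sat[y0][x1 + 1] - sat[y1 + 1][x0] + sat[y0][x0];
    }

    public static void main(String[] args) {
        long[][] sat = build(new int[][]{{1, 2}, {3, 4}});
        // Radius-1 window at (0,0) clips to the whole 2x2 image: prints 10
        System.out.println(windowSum(sat, 2, 2, 0, 0, 1));
    }
}
```

Building the table is one pass over the image, so it pays off when many window sums are taken from the same (unchanged) image.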
About timing your code: System.nanoTime() is OK (I don't think you can do better, since it uses OS timers as far as I know), but:
Don't try to measure too short a task, or the accuracy won't be good. I think anything less than a few milliseconds will give you potential trouble. References, anyone?
Measure multiple times and take the median of the measurements. Outside effects can severely slow the execution, making your estimate useless. Taking the mean doesn't work as well because it is sensitive to such outliers.
Many JVMs have a JIT compiler, so you may want to execute your code multiple times before you measure, so the compiler doesn't kick in somewhere in the middle of your measurement and make half of your measurements suddenly 10x faster than the rest. Better to measure after your VM has "warmed up".
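Putting those three points together, a timing harness might look like this (a sketch; the workload here is a stand-in for the method being measured):

```java
import java.util.Arrays;

public class TimingHarness {
    static long sink; // consume results so the JIT can't discard the work

    // Stand-in for the method under test.
    static void workload() {
        long s = 0;
        for (int i = 0; i < 1_000_000; i++) s += i;
        sink += s;
    }

    // Warm up first so the JIT has already compiled the code, then take
    // the median of several runs, which is robust against outliers.
    static long medianNanos(int warmup, int runs) {
        for (int i = 0; i < warmup; i++) workload();
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            workload();
            samples[i] = System.nanoTime() - start;
        }
        Arrays.sort(samples);
        return samples[runs / 2];
    }

    public static void main(String[] args) {
        System.out.println("median: " + medianNanos(10, 11) + " ns");
    }
}
```

For anything serious a harness like JMH handles these pitfalls (dead-code elimination, warm-up, statistics) automatically.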