I'm trying to make an ellipse rotate around the center of the sketch (in the Processing IDE), and I'm having trouble working out why it isn't moving, even though the drawing happens in draw() and the x and y offsets change. I mapped the initial x and y positions of the ellipses and calculated their distance from the center. I was wondering if any of you could help; it'll be much appreciated. ^^ (beginner here)
I don't know what's wrong with my conversion from polar to Cartesian coordinates.
Heart heart = new Heart();
int h = 500;
int w = 500;

void setup() {
  size(500, 500);
  heart = new Heart();
  heart.init();
  //heart.display();
}

void draw() {
  background(0);
  heart.rotate();
}
class Heart {
  float[][] pos;
  float[] dist;
  int hold;
  float xOff;
  float yOff;

  Heart() {
    hold = 10;
    dist = new float[hold];
    pos = new float[hold][hold];
  }

  void init() {
    for (int i = 0; i < hold; i++) {
      for (int j = 0; j < hold; j++) {
        pos[i][j] = random(0, h);
      }
    }
  }

  /*void display() {
    for (int k = 0; k < hold; k++) {
      fill(0, 10, 255, 50);
      ellipse(pos[k][0], pos[0][k], 15, 15);
      stroke(255);
      line(w/2, h/2, pos[k][0], pos[0][k]);
    }
  }*/

  void rotate() {
    float[] r = new float[hold];
    int theta;
    for (int k = 0; k < hold; k++) {
      r[k] = dist(w/2, h/2, pos[k][0], pos[0][k]);
      for (theta = 0; theta <= TWO_PI; theta++) {
        xOff = r[k] * cos(theta);
        yOff = r[k] * sin(theta);
        stroke(255);
        println(pos[k][0] + xOff);
        ellipse(pos[k][0] + xOff, pos[0][k] + yOff, 15, 15);
      }
    }
  }
}
I expect the ellipses to revolve; there is no error message.
Use a global variable angle of type float. Increment the angle every frame and pass it to the method Heart.rotate():
float angle = 0.0;

void draw() {
  background(0);
  heart.rotate(angle);
  angle += 0.01;
}
The method Heart.rotate() has to draw the ellipses at one particular angle each frame, rather than looping over all possible angles within a single frame.
class Heart {
  // ...

  void rotate(float theta) {
    float[] r = new float[hold];
    for (int k = 0; k < hold; k++) {
      r[k] = dist(w/2, h/2, pos[k][0], pos[0][k]);
      xOff = r[k] * cos(theta);
      yOff = r[k] * sin(theta);
      stroke(255);
      println(pos[k][0] + xOff);
      ellipse(pos[k][0] + xOff, pos[0][k] + yOff, 15, 15);
    }
  }
}
Note that the display is updated once, after the global draw() has finished executing. There is no display update inside the loop in Heart.rotate().
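For reference, here is the polar-to-Cartesian step on its own as plain Java, outside of Processing. The center (250, 250) and radius 100 are made-up values for illustration; each frame would pass a slightly larger theta to get the revolving motion.

```java
public class Orbit {
    // Polar to Cartesian: the point at distance r from the center (cx, cy)
    // at angle theta (in radians).
    public static double[] orbit(double cx, double cy, double r, double theta) {
        return new double[]{ cx + r * Math.cos(theta), cy + r * Math.sin(theta) };
    }

    public static void main(String[] args) {
        // Sample a few angles of one revolution, as consecutive frames would.
        for (double theta = 0; theta < 2 * Math.PI; theta += Math.PI / 2) {
            double[] p = orbit(250, 250, 100, theta);
            System.out.printf("theta=%.2f -> (%.1f, %.1f)%n", theta, p[0], p[1]);
        }
    }
}
```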
My perceptron doesn't find the right y-intercept even though I added a bias. The slope is correct. This is my second attempt at coding a perceptron from scratch, and I got the same error both times.
The perceptron evaluates whether a point on the canvas is above or below the intercept line. The inputs are the x-coordinate, the y-coordinate, and 1 for the bias.
Perceptron class:
class Perceptron
{
  float[] weights;

  Perceptron(int layerSize)
  {
    weights = new float[layerSize];
    for (int i = 0; i < layerSize; i++)
    {
      weights[i] = random(-1.0, 1.0);
    }
  }

  float Evaluate(float[] input)
  {
    float sum = 0;
    for (int i = 0; i < weights.length; i++)
    {
      sum += weights[i] * input[i];
    }
    return sum;
  }

  float Learn(float[] input, int expected)
  {
    float guess = Evaluate(input);
    float error = expected - guess;
    for (int i = 0; i < weights.length; i++)
    {
      weights[i] += error * input[i] * 0.01;
    }
    return guess;
  }
}
This is the testing code:
PVector[] points;
float m = 1; // y = m*x + q (in canvas space)
float q = 0;
Perceptron brain;

void setup()
{
  size(600, 600);
  points = new PVector[100];
  for (int i = 0; i < points.length; i++)
  {
    points[i] = new PVector(random(0, width), random(0, height));
  }
  brain = new Perceptron(3);
}

void draw()
{
  background(255);
  DrawGraph();
  DrawPoints();
  //noLoop();
}

void DrawPoints()
{
  for (int i = 0; i < points.length; i++)
  {
    float[] input = new float[] {points[i].x / width, points[i].y / height, 1};
    int expected = ((m * points[i].x + q) < points[i].y) ? 1 : 0; // is the point above the line?
    float output = brain.Learn(input, expected);
    fill(sign(output) * 255);
    stroke(expected * 255, 100, 100);
    strokeWeight(3);
    ellipse(points[i].x, points[i].y, 20, 20);
  }
}

int sign(float x)
{
  return x >= 0 ? 1 : 0;
}

void DrawGraph()
{
  float y1 = 0 * m + q;
  float y2 = width * m + q;
  stroke(255, 100, 100);
  strokeWeight(3);
  line(0, y1, width, y2);
}
I found the problem.
float guess = Evaluate(input);
float error = expected - guess;
should be
float guess = sign(Evaluate(input));
float error = expected - guess;
The output was never exactly one or zero, even when the answer was correct. Because of this, even correctly classified points produced a small error that kept the perceptron from settling on the right answer. By taking the sign of the output first, the error is 0 whenever the answer is correct.
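A minimal, self-contained sketch of the corrected update rule. The class name, the hand-picked training points, and the epoch count are made up for illustration; the points are labelled 1 when they lie above the line y = x.

```java
public class PerceptronFix {
    static float[] weights = new float[3]; // x weight, y weight, bias weight

    public static int sign(float x) { return x >= 0 ? 1 : 0; }

    public static float evaluate(float[] input) {
        float sum = 0;
        for (int i = 0; i < weights.length; i++) sum += weights[i] * input[i];
        return sum;
    }

    // The fix: threshold with sign() BEFORE computing the error, so a
    // correctly classified point yields error == 0 and leaves the weights alone.
    public static void learn(float[] input, int expected) {
        int guess = sign(evaluate(input));
        float error = expected - guess;
        for (int i = 0; i < weights.length; i++)
            weights[i] += error * input[i] * 0.01f;
    }

    // Train on a few fixed points and count how many end up classified correctly.
    public static int trainAndScore() {
        java.util.Arrays.fill(weights, 0f);
        float[][] pts = { {0.1f, 0.9f}, {0.2f, 0.7f}, {0.4f, 0.9f}, {0.3f, 0.1f},
                          {0.8f, 0.2f}, {0.9f, 0.6f}, {0.6f, 0.8f}, {0.7f, 0.1f} };
        for (int epoch = 0; epoch < 2000; epoch++)
            for (float[] p : pts)
                learn(new float[]{p[0], p[1], 1f}, p[1] > p[0] ? 1 : 0);
        int correct = 0;
        for (float[] p : pts)
            if (sign(evaluate(new float[]{p[0], p[1], 1f})) == (p[1] > p[0] ? 1 : 0))
                correct++;
        return correct;
    }

    public static void main(String[] args) {
        System.out.println(trainAndScore() + "/8"); // separable data converges
    }
}
```

Because the data is linearly separable, the perceptron convergence theorem guarantees this classifies all training points correctly after finitely many updates.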
I have a double[][] matrix with arbitrary dimensions, but bigger than 300 in one or both dimensions. I want to scale it down to double[300][300].
My main approach is to interpolate the matrix up to double[600][600] and then average groups of four elements, i.e. the elements (0,0), (0,1), (1,0) and (1,1) will become the (0,0) of the final 300x300 matrix.
I have found the interpolation library in Java, but I cannot figure out how to use it. Can anyone provide some examples or info?
The library is: http://docs.oracle.com/cd/E17802_01/products/products/java-media/jai/forDevelopers/jai-apidocs/javax/media/jai/Interpolation.html
Thanks.
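The second step of the approach described above (averaging each 2x2 block) can be sketched on its own, assuming the matrix has already been interpolated up to even dimensions; the class and method names are made up for illustration:

```java
public class Halve {
    // Downsample a 2N x 2N matrix to N x N by averaging each 2x2 block,
    // so src cells (0,0), (0,1), (1,0), (1,1) become dst cell (0,0), etc.
    public static double[][] halve(double[][] src) {
        int n = src.length / 2;
        double[][] dst = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                dst[i][j] = (src[2*i][2*j]   + src[2*i][2*j+1]
                           + src[2*i+1][2*j] + src[2*i+1][2*j+1]) / 4.0;
        return dst;
    }

    public static void main(String[] args) {
        double[][] dst = halve(new double[][]{{1, 2}, {3, 4}});
        System.out.println(dst[0][0]); // (1+2+3+4)/4 = 2.5
    }
}
```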
What about writing a simple method that maps source cells onto destination cells, then averages them out?
public static boolean matrixReduce(double[][] dst, double[][] src) {
    double dstMaxX = dst.length - 1, dstMaxY = dst[0].length - 1;
    double srcMaxX = src.length - 1, srcMaxY = src[0].length - 1;
    int[][] count = new int[dst.length][dst[0].length];
    for (int x = 0; x < src.length; x++) {
        for (int y = 0; y < src[0].length; y++) {
            int xx = (int) Math.round((double) x * dstMaxX / srcMaxX);
            int yy = (int) Math.round((double) y * dstMaxY / srcMaxY);
            dst[xx][yy] += src[x][y];
            count[xx][yy]++;
        }
    }
    for (int x = 0; x < dst.length; x++) {
        for (int y = 0; y < dst[0].length; y++) {
            dst[x][y] /= count[x][y];
        }
    }
    return true;
}
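A quick sanity check of the method, reproduced here so the snippet compiles on its own: reducing a 4x4 matrix to 2x2 should average each 2x2 block (the demo values 10*x + y are made up so every cell is distinct).

```java
public class ReduceDemo {
    // matrixReduce from the answer above, copied verbatim to keep this
    // snippet self-contained.
    public static boolean matrixReduce(double[][] dst, double[][] src) {
        double dstMaxX = dst.length - 1, dstMaxY = dst[0].length - 1;
        double srcMaxX = src.length - 1, srcMaxY = src[0].length - 1;
        int[][] count = new int[dst.length][dst[0].length];
        for (int x = 0; x < src.length; x++) {
            for (int y = 0; y < src[0].length; y++) {
                int xx = (int) Math.round((double) x * dstMaxX / srcMaxX);
                int yy = (int) Math.round((double) y * dstMaxY / srcMaxY);
                dst[xx][yy] += src[x][y];
                count[xx][yy]++;
            }
        }
        for (int x = 0; x < dst.length; x++)
            for (int y = 0; y < dst[0].length; y++)
                dst[x][y] /= count[x][y];
        return true;
    }

    public static double[][] demo() {
        double[][] src = new double[4][4];
        for (int x = 0; x < 4; x++)
            for (int y = 0; y < 4; y++)
                src[x][y] = 10 * x + y;       // distinct value per cell
        double[][] dst = new double[2][2];    // each dst cell averages a 2x2 block
        matrixReduce(dst, src);
        return dst;
    }

    public static void main(String[] args) {
        double[][] dst = demo();
        System.out.println(dst[0][0] + " " + dst[1][1]); // 5.5 27.5
    }
}
```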
I have a 3x3 matrix in OpenCV format (org.opencv.core.Mat) that I want to copy into an android.graphics.Matrix. Any idea how?
[EDIT]
Here's the final version, as inspired by @elmiguelao. The source matrix is from OpenCV and the destination matrix is from Android.
static void transformMatrix(Mat src, Matrix dst) {
    int columns = src.cols();
    int rows = src.rows();
    float[] values = new float[columns * rows];
    int index = 0;
    // Mat.get() takes (row, col); Matrix.setValues() expects row-major order.
    for (int y = 0; y < rows; y++) {
        for (int x = 0; x < columns; x++) {
            double[] value = src.get(y, x);
            values[index] = (float) value[0];
            index++;
        }
    }
    dst.setValues(values);
}
Something along these lines (the Java bindings have no pointer access, so the bulk get() is used either way):

Mat opencv_matrix;
Matrix android_matrix;

float[] opencv_matrix_values = new float[9];
// The .get(row, col, float[]) overload seems to do a bulk copy.
opencv_matrix.get(0, 0, opencv_matrix_values);
android_matrix.setValues(opencv_matrix_values);
This function also respects the Mat's data type (float or double):
static Matrix cvMat2Matrix(Mat source) {
  if (source == null || source.empty()) {
    return null;
  }
  float[] matrixValuesF = new float[source.cols() * source.rows()];
  if (CvType.depth(source.type()) == CvType.CV_32F) {
    source.get(0, 0, matrixValuesF);
  } else {
    double[] matrixValuesD = new double[matrixValuesF.length];
    // will throw a java.lang.UnsupportedOperationException if the type is not CvType.CV_64F
    source.get(0, 0, matrixValuesD);
    for (int i = 0; i < matrixValuesD.length; i++) {
      matrixValuesF[i] = (float) matrixValuesD[i];
    }
  }
  Matrix result = new Matrix();
  result.setValues(matrixValuesF);
  return result;
}
I want to blur a BufferedImage in Java without using a special blurring API.
I found this page and wrote this code:
public int[][] filter(int[][] matrix) {
  float[] blurmatrix = {
    0.111f, 0.111f, 0.111f,
    0.111f, 0.111f, 0.111f,
    0.111f, 0.111f, 0.111f,
  };
  int[][] returnMatrix = new int[matrix.length][matrix[0].length];
  for (int i = 0; i < matrix.length; i++) {
    for (int j = 0; j < matrix[0].length; j++) {
      returnMatrix[i][j] = matrix[i][j];
      for (int k = 0; k < blurmatrix.length; k++) {
        float blurPixel = blurmatrix[k];
        int newPixel = (int) (returnMatrix[i][j] * blurPixel);
        returnMatrix[i][j] = newPixel;
      }
    }
  }
  return returnMatrix;
}
The int matrix comes from this method:
public int[][] getMatrixOfImage(BufferedImage bufferedImage) {
  int width = bufferedImage.getWidth(null);
  int height = bufferedImage.getHeight(null);
  int[][] returnMatrix = new int[width][height];
  for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
      returnMatrix[i][j] = bufferedImage.getRGB(i, j);
    }
  }
  return returnMatrix;
}
But it won't work. What is wrong?
Thanks!
UPDATE
The problem is that the result is not what it should be.
When I have this blurmatrix:
float[] blurmatrix = {
  10.111f, 0.111f, 0.111f,
  0.111f, 50.111f, 0.111f,
  0.111f, 0.111f, 10.111f,
};
I get this result: http://img854.imageshack.us/img854/541/2qw7.png
And when I have this blurmatrix:
float[] blurmatrix = {
0.111f, 0.111f, 0.111f,
0.111f, 0.111f, 0.111f,
0.111f, 0.111f, 0.111f,
};
the picture disappears (comes out blank).
Your matrix seems wrong. Generally the sum of all numbers in these matrices is 1. Your matrix will probably make everything white.
Edit: I see you corrected the matrix.
Edit2: There's a lot wrong with your code.
Your method getMatrixOfImage returns an array of 32-bit RGB or RGBA values.
You multiply these values by the filter values. That is incorrect: this sort of multiplication makes the value of one color spill into the other colors. You need to multiply the R, G and B values separately.
Your innermost loop (the one with the k index) is completely wrong. You take a pixel and multiply it by 0.111 nine times. What you need to do is take the 3x3 square of pixels around each pixel, multiply each of them by the corresponding filter value, sum them up, and save the result as this pixel.
Another problem is that you fill pixels from the source image into the destination one by one, which won't work, because you need the original values of adjacent pixels, some of which have already been overwritten.
Your function needs to create a new image array with the same size as the source image.
Then it needs to iterate through the destination image and compute each pixel like this:
Take the source pixels at positions [x-1][y-1], [x][y-1], [x+1][y-1], [x-1][y], [x][y], [x+1][y], [x-1][y+1], [x][y+1] and [x+1][y+1], multiply them by the filter values (in your case 0.111 for each of them), sum them up, and save the result into the new image.
Note that you need to do this for each color separately (use a binary AND and bit shifting to obtain each color value). You also need to handle the edges, where a pixel such as [x-1][y-1] might not exist. You can substitute the value 0 or use [x][y] there.
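A minimal sketch of the per-channel separation described above, using masks and shifts on a packed 32-bit ARGB int. The helper names and the sample values are made up; the 0.111 weight mirrors the question's kernel.

```java
public class ChannelDemo {
    // Extract each color channel from a packed ARGB pixel with a shift
    // and a binary AND, as the answer describes.
    public static int red(int argb)   { return (argb >> 16) & 0xFF; }
    public static int green(int argb) { return (argb >> 8) & 0xFF; }
    public static int blue(int argb)  { return argb & 0xFF; }

    // Repack separate channels into one opaque ARGB pixel.
    public static int pack(int r, int g, int b) {
        return (0xFF << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int pixel = pack(200, 100, 50);
        // Weighting each channel separately (here by one 0.111 kernel entry)
        // avoids one channel's bits spilling into another.
        int weighted = pack((int) (red(pixel) * 0.111f),
                            (int) (green(pixel) * 0.111f),
                            (int) (blue(pixel) * 0.111f));
        System.out.println(red(weighted) + " " + green(weighted) + " " + blue(weighted));
    }
}
```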
You're not doing the convolution correctly. You need to set the pixel at (i, j) to the average of all its surrounding pixels. That is what the 1/9 = 0.111f factors are for. A convolution with this kernel averages all the neighbours of a pixel into a single value and assigns that value to the central pixel:
for (int i = 0; i < matrix.length; i++) {
  for (int j = 0; j < matrix[i].length; j++) {
    // read from the source matrix so already-written cells are not reused
    returnMatrix[i][j] += (int) (0.111f * get(matrix, i-1, j-1));
    returnMatrix[i][j] += (int) (0.111f * get(matrix, i-1, j  ));
    returnMatrix[i][j] += (int) (0.111f * get(matrix, i-1, j+1));
    returnMatrix[i][j] += (int) (0.111f * get(matrix, i  , j-1));
    returnMatrix[i][j] += (int) (0.111f * get(matrix, i  , j  ));
    returnMatrix[i][j] += (int) (0.111f * get(matrix, i  , j+1));
    returnMatrix[i][j] += (int) (0.111f * get(matrix, i+1, j-1));
    returnMatrix[i][j] += (int) (0.111f * get(matrix, i+1, j  ));
    returnMatrix[i][j] += (int) (0.111f * get(matrix, i+1, j+1));
  }
}
int get(int[][] m, int i, int j) {
  if (i >= 0 && i < m.length && j >= 0 && j < m[i].length) {
    return m[i][j];
  }
  return 0;
}
This program implements convolution over the image matrix of a BufferedImage using a kernel. Different kernels can be used for different effects.
Hope it helps.
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.ImageIO;

class psp {
  public static void main(String[] args) {
    try {
      File input = new File("abc.jpg");
      File output = new File("output1.jpg");
      BufferedImage picture1 = ImageIO.read(input); // original
      BufferedImage picture2 = new BufferedImage(picture1.getWidth(), picture1.getHeight(), BufferedImage.TYPE_INT_RGB);
      int width = picture1.getWidth();
      int height = picture1.getHeight();
      //int kernel[][] = {{-1,-1,-1},{-1,8,-1},{-1,-1,-1}}; // for edge detection
      float kernel[][] = {{0.111f,0.111f,0.111f},{0.111f,0.111f,0.111f},{0.111f,0.111f,0.111f}}; // for blur
      //float kernel[][] = {{0.111f,0.111f,0.111f,0.111f,0.111f},{0.111f,0.111f,0.111f,0.111f,0.111f},{0.111f,0.111f,0.111f,0.111f,0.111f},{0.111f,0.111f,0.111f,0.111f,0.111f},{0.111f,0.111f,0.111f,0.111f,0.111f}};
      //int kernel[][] = {{0,-1,0},{-1,5,-1},{0,-1,0}}; // for sharpen
      for (int y = 0; y < height; y++) { // loop over the image
        for (int x = 0; x < width; x++) {
          int r = 0, g = 0, b = 0; // per-channel kernel accumulators
          for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
              try {
                Color c = new Color(picture1.getRGB(x + i - 1, y + j - 1)); // x+i-1, y+j-1 centers the kernel on (x, y)
                r += c.getRed() * kernel[i][j];
                b += c.getBlue() * kernel[i][j];
                g += c.getGreen() * kernel[i][j];
              } catch (Exception e) {
                // ignore neighbours that fall outside the image border
              }
            }
          }
          r = Math.min(255, Math.max(0, r));
          g = Math.min(255, Math.max(0, g));
          b = Math.min(255, Math.max(0, b));
          Color color = new Color(r, g, b);
          picture2.setRGB(x, y, color.getRGB());
        }
      }
      ImageIO.write(picture2, "jpg", output);
    } catch (Exception e) {
      System.out.println(e);
    }
  }
}