I'm having trouble getting my method to work. The method should mirror any image I choose across its diagonal to produce a mirror effect, but at the moment it just produces the same image unedited and I don't know what I'm doing wrong. Any help would be greatly appreciated. Thank you.
public Picture mirrorImageDiagonal() {
    int size = this.getWidth();
    Pixel rightPixel = null;
    Pixel leftTargetPixel = null;
    Pixel rightTargetPixel = null;
    Picture target = new Picture(size, size);
    for (double x = 0; x < size; x++) {
        for (double y = 0; y <= x; y++) {
            int yIndex = Math.min((int) y, this.getHeight() - 1);
            int xIndex = Math.min((int) x, this.getWidth() - 1);
            leftTargetPixel = target.getPixel(yIndex, xIndex);
            rightTargetPixel = target.getPixel(xIndex, yIndex);
            rightPixel = this.getPixel(xIndex, yIndex);
            rightTargetPixel.setColor(rightPixel.getColor());
            leftTargetPixel.setColor(rightPixel.getColor());
        }
    }
    return target;
}
I am assuming that you are trying to complete the A6 challenge in the picture lab packet. I just completed this for school; even if that isn't your assignment, I hope this still helps you.
public void mirrorDiagonal()
{
    Pixel[][] pixels = this.getPixels2D();
    Pixel pixel1 = null;
    Pixel pixel2 = null;
    int width = pixels[0].length;
    for (int row = 0; row < pixels.length; row++)
    {
        for (int col = 0; col < width; col++)
        {
            if (col < pixels.length)
            {
                pixel1 = pixels[row][col];
                pixel2 = pixels[col][row];
                pixel1.setColor(pixel2.getColor());
            }
        }
    }
}
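If you would rather keep your original signature and return a new Picture instead of changing the image in place, the same row/column swap applies. Below is a minimal sketch along those lines; it reuses the getPixels2D()/getColor()/setColor() calls shown above and assumes Picture has a copy constructor, which you may need to adapt for your class:

public Picture mirrorImageDiagonal() {
    // work on a copy so the original picture stays untouched
    // (the copy constructor here is an assumption; adjust for your Picture class)
    Picture target = new Picture(this);
    Pixel[][] pixels = target.getPixels2D();
    // only the square region that exists in both dimensions can be mirrored
    int limit = Math.min(pixels.length, pixels[0].length);
    for (int row = 0; row < limit; row++) {
        for (int col = 0; col < row; col++) {
            // copy each pixel below the diagonal onto its mirror position above it
            pixels[col][row].setColor(pixels[row][col].getColor());
        }
    }
    return target;
}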
I want to pixelate an image with JavaFX.
My problem is that only one block of pixels ends up written, so the effect is applied just once.
Here is my code:
Image img = imgView.getImage();
PixelReader pixelReader = img.getPixelReader();
WritableImage wImage = new WritableImage(
        (int) img.getWidth(),
        (int) img.getHeight());
PixelWriter pixelWriter = wImage.getPixelWriter();
for (int y = 1; y < img.getHeight(); y += 3) {
    for (int x = 1; x < img.getWidth(); x += 3) {
        Color px = pixelReader.getColor(x, y);
        float red = (float) px.getRed();
        float green = (float) px.getGreen();
        float blue = (float) px.getBlue();
        Color all = new Color(red / 3, green / 3, blue / 3, 1);
        for (int u = 0; u <= 3; u++) {
            for (int i = 0; i <= 3; i++) {
                pixelWriter.setColor(u, i, all);
            }
        }
    }
}
Just check the part where you set the color:
for (int u = 0; u <= 3; u++) {
    for (int i = 0; i <= 3; i++) {
        pixelWriter.setColor(u, i, all);
    }
}
As you can see, you always set the color of the pixels from (0,0) to (3,3), regardless of which block you are processing.
You need to use
pixelWriter.setColor(x + u, y + i, all);
However, you need to make sure you don't try to set the color of pixels outside the image. Check the boundaries of the loops over x, y, u and i.
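A hedged sketch of that inner part with the offsets added and a bounds check, using the img, pixelWriter, x, y and all variables from your snippet (note it also uses < 3 rather than <= 3 so each written block matches the step size of 3):

// write the averaged color into the block starting at (x, y),
// skipping anything that would fall outside the image
for (int u = 0; u < 3; u++) {
    for (int i = 0; i < 3; i++) {
        int targetX = x + u;
        int targetY = y + i;
        if (targetX < img.getWidth() && targetY < img.getHeight()) {
            pixelWriter.setColor(targetX, targetY, all);
        }
    }
}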
I have to create an Android app for image registration. After cropping the images I created a 2D array for each one and computed an FFT using JTransforms, then I tried to build a cross-correlation matrix. Searching this matrix for the coordinates of the maximum value, I expected to get the X and Y shift for my image, but the values are wrong and I can't find the error.
public void Registration(Bitmap image, Bitmap image2) {
    int square, x, y;
    int Min2, Min1, Min;
    Min1 = min(image.getHeight(), image2.getHeight());
    Min2 = min(image.getWidth(), image2.getWidth());
    if (Min1 < Min2)
        Min = Min1;
    else
        Min = Min2;
    if (Min > 1024)
        square = 1024;
    else {
        if (Min > 512)
            square = 512;
        else {
            if (Min < 256)
                square = 128;
            else
                square = 256;
        }
    }
    Bitmap crop = Bitmap.createBitmap(image, 0, 0, square, square);
    Bitmap crop2 = Bitmap.createBitmap(image2, 0, 0, square, square);
    float[][] array = new float[square-1][square-1];
    float[][] array2 = new float[square-1][square-1];
    float[][] array3 = new float[square-1][square-1];
    for (x = 0; x < square-1; x++) {
        int p = crop.getPixel(x, x);
        int p1 = crop2.getPixel(x, x);
        array[x][x] = (Color.red(p) + Color.green(p) + Color.blue(p)) / 3;
        array2[x][x] = (Color.red(p1) + Color.green(p1) + Color.blue(p1)) / 3;
    }
    for (y = square-1; y < (2*square)-1; y++) {
        for (x = 0; x < square-1; x++) {
            array[x][y] = 0;
            array2[x][y] = 0;
        }
    }
    FloatFFT_2D a = new FloatFFT_2D(square, square);
    FloatFFT_2D b = new FloatFFT_2D(square, square);
    a.complexForward(array);
    b.complexForward(array2);
    for (y = 0; y < (2*square)-1; y++) {
        for (x = 0; x < square-1; x++) {
            if (y >= square) {
                array2[x][y] = -array2[x][y];
            }
            array3[x][y] = array[x][y] * array2[x][y];
        }
    }
    FloatFFT_2D c = new FloatFFT_2D(square-1, square-1);
    c.complexInverse(array3, false);
    Max(array3, (square), (2*square));
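For reference, the cross-correlation step with JTransforms usually follows the pattern sketched below. This is an illustration of the general technique rather than a drop-in fix for the code above: it assumes the interleaved complex layout that FloatFFT_2D uses for complex data (for an n x n transform the array has n rows and 2*n columns, with a[row][2*col] holding the real part and a[row][2*col+1] the imaginary part), and n here stands in for the crop size (your square). Note that the intensity arrays need a value for every (row, col), not only the diagonal entries.

// n x n complex arrays in JTransforms' interleaved layout (n rows, 2*n columns)
float[][] a = new float[n][2 * n];   // grayscale of image 1 goes into the real slots
float[][] b = new float[n][2 * n];   // grayscale of image 2 goes into the real slots
// ... fill a[row][2*col] and b[row][2*col] with the intensity of every pixel ...

FloatFFT_2D fft = new FloatFFT_2D(n, n);
fft.complexForward(a);
fft.complexForward(b);

// multiply A by the complex conjugate of B, element by element
float[][] corr = new float[n][2 * n];
for (int row = 0; row < n; row++) {
    for (int col = 0; col < n; col++) {
        float ar = a[row][2 * col], ai = a[row][2 * col + 1];
        float br = b[row][2 * col], bi = b[row][2 * col + 1];
        corr[row][2 * col]     = ar * br + ai * bi;   // real part of A * conj(B)
        corr[row][2 * col + 1] = ai * br - ar * bi;   // imaginary part
    }
}

// back to the spatial domain; the location of the largest real value
// (corr[row][2*col]) gives the shift between the two images
fft.complexInverse(corr, true);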
I made two methods for a class called Picture; the names are self-explanatory. The getAverageColor() method gets the average color of all the pixels in a certain area of the image specified by the parameters passed in. The pixelate() method uses getAverageColor() to pixelate the image. The whole thing works, but it takes upwards of 2 minutes to pixelate a single image, and even longer if the pixelSize parameter is made smaller or the image is larger. So I was wondering if there is a better algorithm for doing this by manipulating the pixels.
/**
 * NOTE: The smaller the pixelSize the longer the pixelation process takes
 */
public void pixelate(int pixelSize)
{
    Pixel[][] pixels = this.getPixels2D();
    int blockSize = pixelSize;
    Color averageColor = null;
    for (int row = 0; row < pixels.length; row += blockSize)
    {
        for (int col = 0; col < pixels[row].length; col += blockSize)
        {
            if (!((col + blockSize > pixels[0].length) || (row + blockSize > pixels.length)))
            {
                averageColor = getAverageColor(row, col, row + blockSize, col + blockSize);
            }
            for (int row_2 = row; (row_2 < row + blockSize) && (row_2 < pixels.length); row_2++)
            {
                for (int col_2 = col; (col_2 < col + blockSize) && (col_2 < pixels[0].length); col_2++)
                {
                    pixels[row_2][col_2].setColor(averageColor);
                }
            }
        }
    }
}

public Color getAverageColor(int startRow, int startCol, int endRow, int endCol)
{
    Pixel[][] pixels = this.getPixels2D();
    Color averageColor = null;
    int totalPixels = (endRow - startRow) * (endCol - startCol);
    int totalRed = 0;
    int averageRed = 0;
    int totalGreen = 0;
    int averageGreen = 0;
    int totalBlue = 0;
    int averageBlue = 0;
    for (int row = startRow; row < endRow; row++)
    {
        for (int col = startCol; col < endCol; col++)
        {
            totalRed += pixels[row][col].getRed();
            totalGreen += pixels[row][col].getGreen();
            totalBlue += pixels[row][col].getBlue();
        }
    }
    averageRed = totalRed / totalPixels;
    averageGreen = totalGreen / totalPixels;
    averageBlue = totalBlue / totalPixels;
    averageColor = new Color(averageRed, averageGreen, averageBlue);
    return averageColor;
}
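One thing worth trying: getAverageColor() calls this.getPixels2D() again for every block, which rebuilds the whole pixel grid each time and is a likely source of the slowdown. Here is a sketch of the same algorithm done over a single Pixel[][] grab, with the partial blocks at the edges handled via Math.min (the method name pixelateFaster is just for illustration; it assumes the same Pixel/Color API used above):

public void pixelateFaster(int pixelSize)
{
    // fetch the pixel grid once instead of once per block
    Pixel[][] pixels = this.getPixels2D();
    for (int row = 0; row < pixels.length; row += pixelSize)
    {
        for (int col = 0; col < pixels[0].length; col += pixelSize)
        {
            int endRow = Math.min(row + pixelSize, pixels.length);
            int endCol = Math.min(col + pixelSize, pixels[0].length);
            int count = (endRow - row) * (endCol - col);
            int totalRed = 0, totalGreen = 0, totalBlue = 0;
            // first pass over the block: sum the channels
            for (int r = row; r < endRow; r++)
            {
                for (int c = col; c < endCol; c++)
                {
                    totalRed += pixels[r][c].getRed();
                    totalGreen += pixels[r][c].getGreen();
                    totalBlue += pixels[r][c].getBlue();
                }
            }
            Color average = new Color(totalRed / count, totalGreen / count, totalBlue / count);
            // second pass: paint the whole block with its average color
            for (int r = row; r < endRow; r++)
            {
                for (int c = col; c < endCol; c++)
                {
                    pixels[r][c].setColor(average);
                }
            }
        }
    }
}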
I'm creating an Image filter program and I want to convert a coloured picture to a grayscale picture with the help of an array matrix.
This is what I have currently:
import java.awt.Color;
import se.lth.cs.ptdc.images.ImageFilter;
public class GrayScaleFilter extends ImageFilter {
    public GrayScaleFilter(String name) {
        super(name);
    }

    public Color[][] apply(Color[][] inPixels, double paramValue) {
        int height = inPixels.length;
        int width = inPixels[0].length;
        Color[][] outPixels = new Color[height][width];
        for (int i = 0; i < 256; i++) {
            grayLevels[i] = new Color(i, i, i);
        }
        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {
                Color pixel = inPixels[i][j];
                outPixels[i][j] = grayLevels[index];
            }
        }
        return outPixels;
    }
}
It looks like I'm supposed to use this formula: ((R+G+B)/3)
I want to create an array matrix like this:
Color[] grayLevels = new Color[256];
// creates the color (0,0,0) and puts it in grayLevels[0],
// (1,1,1) in grayLevels[1], ..., (255,255,255) in grayLevels[255]
This is the class I'm referring to when I want to use grayscale:
public abstract Color[][] apply(Color[][] inPixels, double paramValue);
protected short[][] computeIntensity(Color[][] pixels) {
int height = pixels.length;
int width = pixels[0].length;
short[][] intensity = new short[height][width];
for (int i = 0; i < height; i++) {
for (int j = 0; j < width; j++) {
Color c = pixels[i][j];
intensity[i][j] = (short) ((c.getRed() + c.getGreen() + c
.getBlue()) / 3);
}
}
return intensity;
}
Any feedback on how I can achieve this?
Instead of using outPixels[i][j] = new Color(intensity, intensity, intensity), build the grayLevels array this way:
for (int i = 0; i < 256; i++) {
    grayLevels[i] = new Color(i, i, i);
}
Then, when you need a certain color, just retrieve it as grayLevels[index].
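Putting it together, apply() can build the table once, reuse the computeIntensity() helper shown above for the (R+G+B)/3 step, and then just look each pixel up. A sketch under those assumptions:

public Color[][] apply(Color[][] inPixels, double paramValue) {
    int height = inPixels.length;
    int width = inPixels[0].length;
    Color[][] outPixels = new Color[height][width];

    // lookup table: grayLevels[i] is the gray color (i, i, i)
    Color[] grayLevels = new Color[256];
    for (int i = 0; i < 256; i++) {
        grayLevels[i] = new Color(i, i, i);
    }

    // (R+G+B)/3 for every pixel, via the inherited helper
    short[][] intensity = computeIntensity(inPixels);
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            outPixels[i][j] = grayLevels[intensity[i][j]];
        }
    }
    return outPixels;
}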
I'm using Processing to divide a large image into a series of smaller, rectangular nodes.
Processing stores the color value for the pixels of a PImage in a pixels array, which I am accessing to break up the image into smaller parts. For some reason, I am getting this output, when my intent was for the entire image to be displayed when the nodes are arranged in draw().
Here is my main class:
ArrayList node = new ArrayList();
PImage grid;
PVector nodeDimensions = new PVector(210, 185);
PVector gridDimensions = new PVector(2549, 3300);
String name = "gridscan.jpeg";

void setup() {
  size(500, 500);
  grid = loadImage(name);
  grid.loadPixels();
  fillPixels();
  noLoop();
}

void fillPixels() {
  int nodeNum = 0;
  for (int startX = 0; startX < 2549 - nodeDimensions.x; startX += nodeDimensions.x) {
    for (int startY = 0; startY < 3300 - nodeDimensions.y; startY += nodeDimensions.y) {
      node.add(new Node());
      sendPixels(new PVector(startX, startY), nodeNum);
      nodeNum++;
    }
  }
}

void sendPixels(PVector start, int nodeNum) {
  for (int x = int(start.x); x < start.x + nodeDimensions.x; x++) {
    for (int y = int(start.y); y < start.x + nodeDimensions.y; y++) {
      Node _node = (Node) node.get(node.size() - 1);
      _node.fillPixel(new PVector(x, y), grid.pixels[int(y*gridDimensions.x+x)]);
    }
  }
}

void draw() {
  drawNodes();
}

void drawNodes() {
  int nodeNum = 0;
  for (int x = 0; x < width; x += nodeDimensions.x) {
    for (int y = 0; y < height; y += nodeDimensions.y) {
      Node _node = (Node) node.get(nodeNum);
      _node.drawMe(new PVector(x - (nodeDimensions.x/2), y - (nodeDimensions.y/2)));
      nodeNum++;
    }
  }
}
And here is the Node class:
class Node {
  color[] pixel;

  Node() {
    pixel = new color[int(nodeDimensions.x * nodeDimensions.y)];
  }

  void fillPixel(PVector pos, color pixelValue) {
    if (int(pos.y * nodeDimensions.y + pos.x) < 38850) pixel[int(pos.y * nodeDimensions.y + pos.x)] = pixelValue;
  }

  void drawMe(PVector centerPos) {
    pushMatrix();
    translate(centerPos.x, centerPos.y);
    for (int x = 0; x < nodeDimensions.x; x++) {
      for (int y = 0; y < nodeDimensions.y; y++) {
        stroke(getPixelColor(new PVector(x, y)));
        point(x, y);
      }
    }
    popMatrix();
  }

  color getPixelColor(PVector pos) {
    return pixel[int(pos.y * nodeDimensions.x + pos.x)];
  }
}
Hopefully my code makes sense. I suspect the issue is in the sendPixels() method of the main class.
I used this page from the Processing reference as a guide for creating that function, and I'm not sure where my logic is wrong.
Any help would be appreciated, and please let me know if I can clarify something.
According to getPixelColor(), it seems the pixel array is laid out row by row.
So in a 5x5 image, the pixel at column 2, row 2 (counting from 1) has index 7.
To get the index you use this formula:
index = (y - 1) * width + x
(or, with Processing's zero-based pixel coordinates, index = y * width + x). Explained this way it looks pretty simple, doesn't it?
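Applied to the sketch in the question with zero-based coordinates, the copy into each node would look roughly like this. It is an indexing sketch only, reusing the grid, node and nodeDimensions variables from the question, and it assumes the node being filled is the one at index nodeNum:

void sendPixels(PVector start, int nodeNum) {
  Node _node = (Node) node.get(nodeNum);
  for (int x = 0; x < nodeDimensions.x; x++) {
    for (int y = 0; y < nodeDimensions.y; y++) {
      // source pixel in the big image: index = y * imageWidth + x (zero-based)
      int srcX = int(start.x) + x;
      int srcY = int(start.y) + y;
      color c = grid.pixels[srcY * grid.width + srcX];
      // store it in the node using the node's own width, matching getPixelColor()
      _node.pixel[y * int(nodeDimensions.x) + x] = c;
    }
  }
}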
Alternatively, you may be able to use getSubimage() on the BufferedImage returned by the getImage method of PImage. There's a related example here.