I have coded a heightmap, but it seems to lag the client and I don't know how to increase the FPS. I get about 3-6 FPS with the heightmap. I'm using a fairly large BMP for the heightmap, I think it's 1024x1024; with a smaller one it runs fine, so maybe I'm just not using the code effectively. Is there a better way to code this heightmap, or did I just code it wrong? This is the first time I have worked on a heightmap. Thanks.
public class HeightMap {
private final float xScale, yScale, zScale;
private float[][] heightMap;
private FloatBuffer vertices, normals, texCoords;
private IntBuffer indices;
private Vector3f[] verticesArray, normalsArray;
private int[] indicesArray;
private int width;
private int height;
public float getHeight(int x, int y) {
return heightMap[x][y] * yScale;
}
public HeightMap(String path, int resolution) {
heightMap = loadHeightmap(path); // use the path passed in rather than a hard-coded file name
xScale = 1000f / resolution;
yScale = 8;
zScale = 1000f / resolution;
verticesArray = new Vector3f[width * height];
vertices = BufferUtils.createFloatBuffer(3 * width * height);
texCoords = BufferUtils.createFloatBuffer(2 * width * height);
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
final int pos = height * x + y;
final Vector3f vertex = new Vector3f(xScale * x, yScale * heightMap[x][y], zScale * y);
verticesArray[pos] = vertex;
vertex.store(vertices);
texCoords.put(x / (float) width);
texCoords.put(y / (float) height);
}
}
vertices.flip();
texCoords.flip();
normalsArray = new Vector3f[height * width];
normals = BufferUtils.createFloatBuffer(3 * width * height);
final float xzScale = xScale;
for (int x = 0; x < width; ++x) {
for (int y = 0; y < height; ++y) {
final int nextX = x < width - 1 ? x + 1 : x;
final int prevX = x > 0 ? x - 1 : x;
float sx = heightMap[nextX][y] - heightMap[prevX][y];
if (x == 0 || x == width - 1) {
sx *= 2;
}
final int nextY = y < height - 1 ? y + 1 : y;
final int prevY = y > 0 ? y - 1 : y;
float sy = heightMap[x][nextY] - heightMap[x][prevY];
if (y == 0 || y == height - 1) {
sy *= 2;
}
final Vector3f normal = new Vector3f(-sx * yScale, 2 * xzScale, sy * yScale).normalise(null);
normalsArray[height * x + y] = normal;
normal.store(normals);
}
}
normals.flip();
indicesArray = new int[6 * (height - 1) * (width - 1)];
indices = BufferUtils.createIntBuffer(6 * (width - 1) * (height - 1));
for (int i = 0; i < width - 1; i++) {
for (int j = 0; j < height - 1; j++) {
int pos = (height - 1) * i + j;
indices.put(height * i + j);
indices.put(height * (i + 1) + j);
indices.put(height * (i + 1) + (j + 1));
indicesArray[6 * pos] = height * i + j;
indicesArray[6 * pos + 1] = height * (i + 1) + j;
indicesArray[6 * pos + 2] = height * (i + 1) + (j + 1);
indices.put(height * i + j);
indices.put(height * i + (j + 1));
indices.put(height * (i + 1) + (j + 1));
indicesArray[6 * pos + 3] = height * i + j;
indicesArray[6 * pos + 4] = height * i + (j + 1);
indicesArray[6 * pos + 5] = height * (i + 1) + (j + 1);
}
}
indices.flip();
}
private float[][] loadHeightmap(String fileName) {
try {
BufferedImage img = ImageIO.read(ResourceLoader.getResourceAsStream(fileName));
width = img.getWidth();
height = img.getHeight();
float[][] heightMap = new float[width][height];
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
heightMap[x][y] = 0xFF & img.getRGB(x, y);
}
}
return heightMap;
} catch (IOException e) {
System.out.println("Nincs meg a heightmap!");
return null;
}
}
public void render() {
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glNormalPointer(0, normals);
glVertexPointer(3, 0, vertices);
glTexCoordPointer(2, 0, texCoords);
glDrawElements(GL_TRIANGLES, indices); // the index buffer holds two separate triangles per quad, so GL_TRIANGLES matches it
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
}
}
Sorry to bring up an old topic, but I see a lot of people ask this:
Use a display list instead of re-submitting the heightmap geometry every frame.
TheCodingUniverse has a good tutorial on how to do this.
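For anyone looking for a concrete starting point, here is a minimal sketch of that idea against the HeightMap class above, assuming LWJGL's GL11 static imports; the displayList field and buildDisplayList method are names added here for illustration. The vertex data is walked once, when the list is compiled, and render() then just replays the list instead of streaming the buffers every frame.
    private int displayList = -1;

    public void buildDisplayList() {
        displayList = glGenLists(1);
        glNewList(displayList, GL_COMPILE);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(3, 0, vertices);
        glNormalPointer(0, normals);
        glTexCoordPointer(2, 0, texCoords);
        glDrawElements(GL_TRIANGLES, indices); // two separate triangles per quad, matching the index buffer
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_NORMAL_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
        glEndList();
    }

    public void render() {
        if (displayList == -1) {
            buildDisplayList(); // compile once, the first time we draw
        }
        glCallList(displayList); // replay the compiled geometry, no per-frame upload
    }
Vertex buffer objects would be the more modern route, but a display list is the smallest change to the code above.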
Related
I'm trying to create a heightmap colored by face, instead of vertex. For example, this is what I currently have:
But this is what I want:
I read that I have to split each vertex into multiple vertices and then index each one separately for the triangles. I also know that Blender has a function like this for its models (split vertices, or something?), but I'm not sure what kind of algorithm I would follow for this. It would be a last resort, because multiplying the number of vertices in the mesh for no reason other than color doesn't seem efficient.
I also discovered something called flat shading (using the flat qualifier on the pixel color in the shaders), but it seems to only color squares instead of triangles. Is there a way to make it shade triangles?
For reference, this is my current heightmap generation code:
public class HeightMap extends GameModel {
private static final float START_X = -0.5f;
private static final float START_Z = -0.5f;
private static final float REFLECTANCE = .1f;
public HeightMap(float minY, float maxY, float persistence, int width, int height, float spikeness) {
super(createMesh(minY, maxY, persistence, width, height, spikeness), REFLECTANCE);
}
protected static Mesh createMesh(final float minY, final float maxY, final float persistence, final int width,
final int height, float spikeness) {
SimplexNoise noise = new SimplexNoise(128, persistence, 2);// Utils.getRandom().nextInt());
float xStep = Math.abs(START_X * 2) / (width - 1);
float zStep = Math.abs(START_Z * 2) / (height - 1);
List<Float> positions = new ArrayList<>();
List<Integer> indices = new ArrayList<>();
for (int z = 0; z < height; z++) {
for (int x = 0; x < width; x++) {
// scale from [-1, 1] to [minY, maxY]
float heightY = (float) ((noise.getNoise(x * xStep * spikeness, z * zStep * spikeness) + 1f) / 2
* (maxY - minY) + minY);
positions.add(START_X + x * xStep);
positions.add(heightY);
positions.add(START_Z + z * zStep);
// Create indices
if (x < width - 1 && z < height - 1) {
int leftTop = z * width + x;
int leftBottom = (z + 1) * width + x;
int rightBottom = (z + 1) * width + x + 1;
int rightTop = z * width + x + 1;
indices.add(leftTop);
indices.add(leftBottom);
indices.add(rightTop);
indices.add(rightTop);
indices.add(leftBottom);
indices.add(rightBottom);
}
}
}
float[] verticesArr = Utils.listToArray(positions);
Color c = new Color(147, 105, 59);
float[] colorArr = new float[positions.size()];
for (int i = 0; i < colorArr.length; i += 3) {
float brightness = (Utils.getRandom().nextFloat() - 0.5f) * 0.5f;
colorArr[i] = (float) c.getRed() / 255f + brightness;
colorArr[i + 1] = (float) c.getGreen() / 255f + brightness;
colorArr[i + 2] = (float) c.getBlue() / 255f + brightness;
}
int[] indicesArr = indices.stream().mapToInt((i) -> i).toArray();
float[] normalArr = calcNormals(verticesArr, width, height);
return new Mesh(verticesArr, colorArr, normalArr, indicesArr);
}
private static float[] calcNormals(float[] posArr, int width, int height) {
Vector3f v0 = new Vector3f();
Vector3f v1 = new Vector3f();
Vector3f v2 = new Vector3f();
Vector3f v3 = new Vector3f();
Vector3f v4 = new Vector3f();
Vector3f v12 = new Vector3f();
Vector3f v23 = new Vector3f();
Vector3f v34 = new Vector3f();
Vector3f v41 = new Vector3f();
List<Float> normals = new ArrayList<>();
Vector3f normal = new Vector3f();
for (int row = 0; row < height; row++) {
for (int col = 0; col < width; col++) {
if (row > 0 && row < height - 1 && col > 0 && col < width - 1) {
int i0 = row * width * 3 + col * 3;
v0.x = posArr[i0];
v0.y = posArr[i0 + 1];
v0.z = posArr[i0 + 2];
int i1 = row * width * 3 + (col - 1) * 3;
v1.x = posArr[i1];
v1.y = posArr[i1 + 1];
v1.z = posArr[i1 + 2];
v1 = v1.sub(v0);
int i2 = (row + 1) * width * 3 + col * 3;
v2.x = posArr[i2];
v2.y = posArr[i2 + 1];
v2.z = posArr[i2 + 2];
v2 = v2.sub(v0);
int i3 = (row) * width * 3 + (col + 1) * 3;
v3.x = posArr[i3];
v3.y = posArr[i3 + 1];
v3.z = posArr[i3 + 2];
v3 = v3.sub(v0);
int i4 = (row - 1) * width * 3 + col * 3;
v4.x = posArr[i4];
v4.y = posArr[i4 + 1];
v4.z = posArr[i4 + 2];
v4 = v4.sub(v0);
v1.cross(v2, v12);
v12.normalize();
v2.cross(v3, v23);
v23.normalize();
v3.cross(v4, v34);
v34.normalize();
v4.cross(v1, v41);
v41.normalize();
normal = v12.add(v23).add(v34).add(v41);
normal.normalize();
} else {
normal.x = 0;
normal.y = 1;
normal.z = 0;
}
normal.normalize();
normals.add(normal.x);
normals.add(normal.y);
normals.add(normal.z);
}
}
return Utils.listToArray(normals);
}
}
Edit
I've tried doing a couple of things. I tried rearranging the indices with flat shading, but that didn't give me the look I wanted. I also tried using a uniform vec3 array and indexing it with gl_VertexID or gl_InstanceID (I'm not entirely sure of the difference), but I couldn't get the arrays to compile.
Here is the github repo, by the way.
Flat-qualified fragment shader inputs receive the same value across a whole primitive; in your case, a triangle.
Of course, a triangle is composed of 3 vertices. And if the vertex shaders output 3 different values, how does the fragment shader know which value to get?
This comes down to what is called the "provoking vertex." When you render, you specify a particular primitive to use in your glDraw* call (GL_TRIANGLE_STRIP, GL_TRIANGLES, etc). These primitive types will generate a number of base primitives (ie: single triangle), based on how many vertices you provided.
When a base primitive is generated, one of the vertices in that base primitive is said to be the "provoking vertex". It is that vertex's data that is used for all flat parameters.
The reason you're seeing this is that the two adjacent triangles happen to use the same provoking vertex. Your mesh is smooth, so two adjacent triangles share 2 vertices, and your mesh generation happens to pick one of those shared vertices as the provoking vertex for both triangles, which means the two triangles get the same flat value.
You will need to adjust your index list or otherwise alter your mesh generation so that this doesn't happen. Or you can just divide your mesh into individual triangles; that's probably much easier.
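As a side note (not something the mesh fix above depends on), OpenGL 3.2+ also lets you choose which vertex of a primitive acts as the provoking vertex. With LWJGL that is a one-line state change, sketched below; it only shifts which shared vertex is read, though, so for genuinely per-face values you still need to adjust the indices or duplicate vertices as described above.
    // A tiny sketch, assuming LWJGL and an OpenGL 3.2+ context: choose which vertex
    // of each triangle "provokes" the flat value (OpenGL's default is the last one).
    import static org.lwjgl.opengl.GL32.*;

    public final class FlatShadingSetup {
        public static void useFirstVertexConvention() {
            glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);
        }
        public static void useLastVertexConvention() {
            glProvokingVertex(GL_LAST_VERTEX_CONVENTION); // the default
        }
    }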
As a last resort, I just duplicated the vertices, and it seems to work. I haven't been able to profile it to see whether it causes a big performance drop. I'd be open to any other suggestions!
for (int z = 0; z < height; z++) {
for (int x = 0; x < width; x++) {
// scale from [-1, 1] to [minY, maxY]
float heightY = (float) ((noise.getNoise(x * xStep * spikeness, z * zStep * spikeness) + 1f) / 2
* (maxY - minY) + minY);
positions.add(START_X + x * xStep);
positions.add(heightY);
positions.add(START_Z + z * zStep);
positions.add(START_X + x * xStep);
positions.add(heightY);
positions.add(START_Z + z * zStep);
}
}
for (int z = 0; z < height - 1; z++) {
for (int x = 0; x < width - 1; x++) {
int leftTop = z * width + x;
int leftBottom = (z + 1) * width + x;
int rightBottom = (z + 1) * width + x + 1;
int rightTop = z * width + x + 1;
indices.add(2 * leftTop);
indices.add(2 * leftBottom);
indices.add(2 * rightTop);
indices.add(2 * rightTop + 1);
indices.add(2 * leftBottom + 1);
indices.add(2 * rightBottom + 1);
}
}
I'm trying to fill my Ellipse; the code works, although I was wondering if there is a more efficient approach.
Given a percentage, it fills the first half of the circle and then the second half.
Let me know if you want to see any other functions; I was mainly concerned about the filling.
public void drawOrb() {
this.icon.drawSprite(this.xPos - this.icon.getWidth() / 2, 29 - this.icon.getHeight() / 2);
int radius = 19;
fillCircleAlpha(this.xPos, this.yPos, radius, 0, 35); // Draws a filled circle given a radius and alpha value.
Ellipse2D.Double circleToAvoid = drawCircle(this.xPos - radius, this.yPos - radius, radius * 2, 0, //The inner circle.
125);
Ellipse2D.Double circleToStart = drawCircle(this.xPos - (radius + 4), this.yPos - (radius + 4),
radius * 2 + 8, 0, 150); // The outer circle.
radius = 23;
int r2 = radius * radius;
int area = r2 << 2;
int rr = radius << 1;
for (int area2 = (int) (area * progress * 2.0), i = 0; i < area2; ++i) { //
int tx = i % rr;
int ty = i / rr;
if (!circleToAvoid.contains(circleToStart.getCenterX() + tx, circleToStart.getY() + ty) //If the index is inside the circle.
&& circleToStart.contains(circleToStart.getCenterX() + tx, circleToStart.getY() + ty)) {
drawPixelsWithOpacity(16777215, this.yPos + ty - radius, 1, 1, 75, this.xPos + tx); // Used to fill each pixel within the circle.
}
}
if (progress > 0.5) {
for (int area3 = (int) (area * (progress - 0.5) * 2.0), j = 0; j < area3; ++j) {
int tx2 = j % rr;
int ty2 = j / rr;
if (!circleToAvoid.contains(circleToStart.getCenterX() - tx2, circleToStart.getMaxY() - ty2)
&& circleToStart.contains(circleToStart.getCenterX() - tx2 - 1.0,
circleToStart.getMaxY() - ty2)) {
drawPixelsWithOpacity(16777215, (int) circleToStart.getMaxY() - ty2, 1, 1, 75,
(int) circleToStart.getCenterX() - tx2 - 1);
}
}
}
radius = 19;
drawCircle(this.xPos - (radius + 4), this.yPos - (radius + 4), radius * 2 + 8, 0, 150);
}
public static void drawPixelsWithOpacity(int color, int yPos, int pixelWidth, int pixelHeight, int opacityLevel, int xPos) {
if (xPos < topX) {
pixelWidth -= topX - xPos;
xPos = topX;
}
if (yPos < topY) {
pixelHeight -= topY - yPos;
yPos = topY;
}
if (xPos + pixelWidth > bottomX)
pixelWidth = bottomX - xPos;
if (yPos + pixelHeight > bottomY)
pixelHeight = bottomY - yPos;
int l1 = 256 - opacityLevel;
int i2 = (color >> 16 & 0xff) * opacityLevel;
int j2 = (color >> 8 & 0xff) * opacityLevel;
int k2 = (color & 0xff) * opacityLevel;
int k3 = width - pixelWidth;
int l3 = xPos + yPos * width;
if (l3 > pixels.length - 1) {
l3 = pixels.length - 1;
}
for (int i4 = 0; i4 < pixelHeight; i4++) {
for (int j4 = -pixelWidth; j4 < 0; j4++) {
int l2 = (pixels[l3] >> 16 & 0xff) * l1;
int i3 = (pixels[l3] >> 8 & 0xff) * l1;
int j3 = (pixels[l3] & 0xff) * l1;
int k4 = ((i2 + l2 >> 8) << 16) + ((j2 + i3 >> 8) << 8) + (k2 + j3 >> 8);
pixels[l3++] = k4;
}
l3 += k3;
}
}
public static Ellipse2D.Double drawCircle(final int x, final int y, final int diameter, final int color, final int opacity) {
final Ellipse2D.Double circle = new Ellipse2D.Double(x, y, diameter, diameter);
for (int i = 0; i < diameter; ++i) {
for (int i2 = 0; i2 < diameter; ++i2) {
if (circle.contains(i + x, i2 + y) && (!circle.contains(i + x - 1, i2 + y - 1) || !circle.contains(i + x + 1, i2 + y + 1) || !circle.contains(i + x - 1, i2 + y + 1) || !circle.contains(i + x + 1, i2 + y - 1))) {
drawPixelsWithOpacity(color, i2 + y, 1, 1, opacity, i + x);
}
}
}
return circle;
}
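One lower-cost direction to consider (a sketch, not a drop-in replacement for the code in the question): instead of testing every pixel of the bounding square with Ellipse2D.contains, each scanline's intersections with the outer and inner circles can be computed once from the circle equation and painted as at most two horizontal spans through the existing drawPixelsWithOpacity helper. The names cx, cy, outerR and innerR below are illustrative, not taken from the original code.
    // Paint one scanline (absolute y coordinate 'row') of the ring between innerR and outerR, centred at (cx, cy).
    static void fillRingRow(int cx, int cy, int outerR, int innerR, int row, int color, int opacity) {
        double dy = row + 0.5 - cy;
        double outerSq = (double) outerR * outerR - dy * dy;
        if (outerSq <= 0) return; // scanline misses the outer circle entirely
        double outerHalf = Math.sqrt(outerSq);
        int oL = (int) Math.ceil(cx - outerHalf);
        int oR = (int) Math.floor(cx + outerHalf);
        double innerSq = (double) innerR * innerR - dy * dy;
        if (innerSq <= 0) { // scanline misses the inner circle: one solid span
            drawPixelsWithOpacity(color, row, oR - oL + 1, 1, opacity, oL);
            return;
        }
        double innerHalf = Math.sqrt(innerSq);
        int iL = (int) Math.floor(cx - innerHalf);
        int iR = (int) Math.ceil(cx + innerHalf);
        if (iL > oL) drawPixelsWithOpacity(color, row, iL - oL, 1, opacity, oL); // left arc segment
        if (oR > iR) drawPixelsWithOpacity(color, row, oR - iR, 1, opacity, iR + 1); // right arc segment
    }
Sweeping row from the top of the outer circle down to a cut-off derived from progress would reproduce the top-to-bottom fill with one or two draw calls per scanline instead of one per pixel.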
I have been working on a Java game engine, but my renderer keeps getting an "unreachable code" error. The error appears in the setPixel method.
public class Renderer {
private int width, height;
private byte[] pixels;
public Renderer(GameContainer gc){
width = gc.getWidth();
height = gc.getHeight();
pixels = ((DataBufferByte)gc.getWindow().getImage().getRaster().getDataBuffer()).getData();
}
public void setPixel(int x, int y, float a, float r, float g, float b){
if((x < 0 || x>= width || y < 0 || y>= height) || a == 0){
return;
int index = (x + y * width) * 4;
pixels[index] = (byte)((a * 255f) + 0.5f);
pixels[index + 1] = (byte)((b * 255f) + 0.5f);
pixels[index + 2] = (byte)((g * 255f) + 0.5f);
pixels[index + 3] = (byte)((r * 255f) + 0.5f);
}
}
public void clear(){
for(int x = 0; x < width; x++){
for(int y = 0; y < height; y++){
setPixel(x,y,1,0,1,1);
}
}
}
}
I think this is what you are trying to do?
Your if statement should not enclose all of the other statements in your method.
public void setPixel(int x, int y, float a, float r, float g, float b){
// Check for invalid values
if((x < 0 || x>= width || y < 0 || y>= height) || a == 0){
// Break out of function if invalid values detected
return;
}
// Update pixel
int index = (x + y * width) * 4;
pixels[index] = (byte)((a * 255f) + 0.5f);
pixels[index + 1] = (byte)((b * 255f) + 0.5f);
pixels[index + 2] = (byte)((g * 255f) + 0.5f);
pixels[index + 3] = (byte)((r * 255f) + 0.5f);
}
The return statement ends the execution of a method, so any statements placed after it in the same block can never run; that is exactly what the "unreachable code" error is telling you. Because your whole method body sits after the return inside the same if block, none of it is reachable. If you prefer not to use an early return at all, you can invert the condition so the pixel is only written for valid arguments:
public void setPixel(int x, int y, float a, float r, float g, float b) {
    // Only write the pixel when the coordinates are in range and alpha is non-zero
    if (x >= 0 && x < width && y >= 0 && y < height && a != 0) {
        int index = (x + y * width) * 4;
        pixels[index] = (byte)((a * 255f) + 0.5f);
        pixels[index + 1] = (byte)((b * 255f) + 0.5f);
        pixels[index + 2] = (byte)((g * 255f) + 0.5f);
        pixels[index + 3] = (byte)((r * 255f) + 0.5f);
    }
}
I am currently making a game in Java and I am trying to draw an image on my screen, but nothing shows up (only a black screen, and no errors) :(
Here is the code to import the image:
public static Bitmap loadBitmap(String fileName) {
try {
BufferedImage img = ImageIO.read(Art.class.getResource(fileName));
int w = img.getWidth();
int h = img.getHeight();
Bitmap result = new Bitmap(w, h);
img.getRGB(0, 0, w, h, result.pixels, 0, w);
for (int i = 0; i < result.pixels.length; i++) {
int in = result.pixels[i];
int col = (in & 0xf) >> 2;
if (in == 0xffff00ff) col = -1;
result.pixels[i] = col;
}
return result;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
And the Bitmap class:
public void draw(Bitmap bitmap, int xOffs, int yOffs)
{
for(int y = 0; y < bitmap.height; y++)
{
int yPix = y + yOffs;
if(yPix < 0 || yPix >= height) continue;
for(int x = 0; x < bitmap.width; x++)
{
int xPix = x + xOffs;
if(xPix < 0 || xPix >= width) continue;
int alpha = bitmap.pixels[x + y * bitmap.width];
if(alpha > 0)
pixels[xPix + yPix * width] = bitmap.pixels[x + y * bitmap.width];
}
}
}
And to draw all of this :
public void render(Game game)
{
for(int y = 0; y < height; y++)
{
float yd = ((y + 0.5f) - height / 2.0f) / height;
if(yd < 0) yd *= -1;
float z = 10 / yd;
for(int x = 0; x < width; x++)
{
float xd = (x - width / 2.0f) / height;
xd *= z;
int xx = (int) (xd) & 7;
int yy = (int) (z + game.time * 0.1f) & 7;
pixels[x + y * width] = Art.floors.pixels[xx + yy * 64];
}
}
}
I have no errors! I don't really understand... is this a bug caused by alpha or something? Oh, and my image.png is 64x64, made in Paint.NET.
I am looking for a copy-paste implementation of Canny edge detection in the Processing language. I have zero idea about image processing and very little clue about Processing, though I understand Java pretty well.
Can some Processing expert tell me if there is a way of implementing this http://www.tomgibara.com/computer-vision/CannyEdgeDetector.java in Processing?
Processing is essentially built on Java, so some of these problems can be solved very easily: you can use Java classes directly in a sketch.
For the demo I am using the implementation you shared.
>>Original Image
>>Changed Image
>>Code
import java.awt.image.BufferedImage;
import java.util.Arrays;
PImage orig;
PImage changed;
void setup() {
orig = loadImage("c:/temp/image.png");
size(250, 166);
CannyEdgeDetector detector = new CannyEdgeDetector();
detector.setLowThreshold(0.5f);
detector.setHighThreshold(1f);
detector.setSourceImage((java.awt.image.BufferedImage)orig.getImage());
detector.process();
BufferedImage edges = detector.getEdgesImage();
changed = new PImage(edges);
noLoop();
}
void draw()
{
//image(orig, 0,0, width, height);
image(changed, 0,0, width, height);
}
// The code below is taken from "http://www.tomgibara.com/computer-vision/CannyEdgeDetector.java"
// I have stripped the comments for conciseness
public class CannyEdgeDetector {
// statics
private final static float GAUSSIAN_CUT_OFF = 0.005f;
private final static float MAGNITUDE_SCALE = 100F;
private final static float MAGNITUDE_LIMIT = 1000F;
private final static int MAGNITUDE_MAX = (int) (MAGNITUDE_SCALE * MAGNITUDE_LIMIT);
// fields
private int height;
private int width;
private int picsize;
private int[] data;
private int[] magnitude;
private BufferedImage sourceImage;
private BufferedImage edgesImage;
private float gaussianKernelRadius;
private float lowThreshold;
private float highThreshold;
private int gaussianKernelWidth;
private boolean contrastNormalized;
private float[] xConv;
private float[] yConv;
private float[] xGradient;
private float[] yGradient;
// constructors
/**
* Constructs a new detector with default parameters.
*/
public CannyEdgeDetector() {
lowThreshold = 2.5f;
highThreshold = 7.5f;
gaussianKernelRadius = 2f;
gaussianKernelWidth = 16;
contrastNormalized = false;
}
public BufferedImage getSourceImage() {
return sourceImage;
}
public void setSourceImage(BufferedImage image) {
sourceImage = image;
}
public BufferedImage getEdgesImage() {
return edgesImage;
}
public void setEdgesImage(BufferedImage edgesImage) {
this.edgesImage = edgesImage;
}
public float getLowThreshold() {
return lowThreshold;
}
public void setLowThreshold(float threshold) {
if (threshold < 0) throw new IllegalArgumentException();
lowThreshold = threshold;
}
public float getHighThreshold() {
return highThreshold;
}
public void setHighThreshold(float threshold) {
if (threshold < 0) throw new IllegalArgumentException();
highThreshold = threshold;
}
public int getGaussianKernelWidth() {
return gaussianKernelWidth;
}
public void setGaussianKernelWidth(int gaussianKernelWidth) {
if (gaussianKernelWidth < 2) throw new IllegalArgumentException();
this.gaussianKernelWidth = gaussianKernelWidth;
}
public float getGaussianKernelRadius() {
return gaussianKernelRadius;
}
public void setGaussianKernelRadius(float gaussianKernelRadius) {
if (gaussianKernelRadius < 0.1f) throw new IllegalArgumentException();
this.gaussianKernelRadius = gaussianKernelRadius;
}
public boolean isContrastNormalized() {
return contrastNormalized;
}
public void setContrastNormalized(boolean contrastNormalized) {
this.contrastNormalized = contrastNormalized;
}
// methods
public void process() {
width = sourceImage.getWidth();
height = sourceImage.getHeight();
picsize = width * height;
initArrays();
readLuminance();
if (contrastNormalized) normalizeContrast();
computeGradients(gaussianKernelRadius, gaussianKernelWidth);
int low = Math.round(lowThreshold * MAGNITUDE_SCALE);
int high = Math.round( highThreshold * MAGNITUDE_SCALE);
performHysteresis(low, high);
thresholdEdges();
writeEdges(data);
}
// private utility methods
private void initArrays() {
if (data == null || picsize != data.length) {
data = new int[picsize];
magnitude = new int[picsize];
xConv = new float[picsize];
yConv = new float[picsize];
xGradient = new float[picsize];
yGradient = new float[picsize];
}
}
private void computeGradients(float kernelRadius, int kernelWidth) {
//generate the gaussian convolution masks
float kernel[] = new float[kernelWidth];
float diffKernel[] = new float[kernelWidth];
int kwidth;
for (kwidth = 0; kwidth < kernelWidth; kwidth++) {
float g1 = gaussian(kwidth, kernelRadius);
if (g1 <= GAUSSIAN_CUT_OFF && kwidth >= 2) break;
float g2 = gaussian(kwidth - 0.5f, kernelRadius);
float g3 = gaussian(kwidth + 0.5f, kernelRadius);
kernel[kwidth] = (g1 + g2 + g3) / 3f / (2f * (float) Math.PI * kernelRadius * kernelRadius);
diffKernel[kwidth] = g3 - g2;
}
int initX = kwidth - 1;
int maxX = width - (kwidth - 1);
int initY = width * (kwidth - 1);
int maxY = width * (height - (kwidth - 1));
//perform convolution in x and y directions
for (int x = initX; x < maxX; x++) {
for (int y = initY; y < maxY; y += width) {
int index = x + y;
float sumX = data[index] * kernel[0];
float sumY = sumX;
int xOffset = 1;
int yOffset = width;
for(; xOffset < kwidth ;) {
sumY += kernel[xOffset] * (data[index - yOffset] + data[index + yOffset]);
sumX += kernel[xOffset] * (data[index - xOffset] + data[index + xOffset]);
yOffset += width;
xOffset++;
}
yConv[index] = sumY;
xConv[index] = sumX;
}
}
for (int x = initX; x < maxX; x++) {
for (int y = initY; y < maxY; y += width) {
float sum = 0f;
int index = x + y;
for (int i = 1; i < kwidth; i++)
sum += diffKernel[i] * (yConv[index - i] - yConv[index + i]);
xGradient[index] = sum;
}
}
for (int x = kwidth; x < width - kwidth; x++) {
for (int y = initY; y < maxY; y += width) {
float sum = 0.0f;
int index = x + y;
int yOffset = width;
for (int i = 1; i < kwidth; i++) {
sum += diffKernel[i] * (xConv[index - yOffset] - xConv[index + yOffset]);
yOffset += width;
}
yGradient[index] = sum;
}
}
initX = kwidth;
maxX = width - kwidth;
initY = width * kwidth;
maxY = width * (height - kwidth);
for (int x = initX; x < maxX; x++) {
for (int y = initY; y < maxY; y += width) {
int index = x + y;
int indexN = index - width;
int indexS = index + width;
int indexW = index - 1;
int indexE = index + 1;
int indexNW = indexN - 1;
int indexNE = indexN + 1;
int indexSW = indexS - 1;
int indexSE = indexS + 1;
float xGrad = xGradient[index];
float yGrad = yGradient[index];
float gradMag = hypot(xGrad, yGrad);
//perform non-maximal supression
float nMag = hypot(xGradient[indexN], yGradient[indexN]);
float sMag = hypot(xGradient[indexS], yGradient[indexS]);
float wMag = hypot(xGradient[indexW], yGradient[indexW]);
float eMag = hypot(xGradient[indexE], yGradient[indexE]);
float neMag = hypot(xGradient[indexNE], yGradient[indexNE]);
float seMag = hypot(xGradient[indexSE], yGradient[indexSE]);
float swMag = hypot(xGradient[indexSW], yGradient[indexSW]);
float nwMag = hypot(xGradient[indexNW], yGradient[indexNW]);
float tmp;
if (xGrad * yGrad <= (float) 0 /*(1)*/
? Math.abs(xGrad) >= Math.abs(yGrad) /*(2)*/
? (tmp = Math.abs(xGrad * gradMag)) >= Math.abs(yGrad * neMag - (xGrad + yGrad) * eMag) /*(3)*/
&& tmp > Math.abs(yGrad * swMag - (xGrad + yGrad) * wMag) /*(4)*/
: (tmp = Math.abs(yGrad * gradMag)) >= Math.abs(xGrad * neMag - (yGrad + xGrad) * nMag) /*(3)*/
&& tmp > Math.abs(xGrad * swMag - (yGrad + xGrad) * sMag) /*(4)*/
: Math.abs(xGrad) >= Math.abs(yGrad) /*(2)*/
? (tmp = Math.abs(xGrad * gradMag)) >= Math.abs(yGrad * seMag + (xGrad - yGrad) * eMag) /*(3)*/
&& tmp > Math.abs(yGrad * nwMag + (xGrad - yGrad) * wMag) /*(4)*/
: (tmp = Math.abs(yGrad * gradMag)) >= Math.abs(xGrad * seMag + (yGrad - xGrad) * sMag) /*(3)*/
&& tmp > Math.abs(xGrad * nwMag + (yGrad - xGrad) * nMag) /*(4)*/
) {
magnitude[index] = gradMag >= MAGNITUDE_LIMIT ? MAGNITUDE_MAX : (int) (MAGNITUDE_SCALE * gradMag);
//NOTE: The orientation of the edge is not employed by this
//implementation. It is a simple matter to compute it at
//this point as: Math.atan2(yGrad, xGrad);
} else {
magnitude[index] = 0;
}
}
}
}
private float hypot(float x, float y) {
return (float) Math.hypot(x, y);
}
private float gaussian(float x, float sigma) {
return (float) Math.exp(-(x * x) / (2f * sigma * sigma));
}
private void performHysteresis(int low, int high) {
Arrays.fill(data, 0);
int offset = 0;
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
if (data[offset] == 0 && magnitude[offset] >= high) {
follow(x, y, offset, low);
}
offset++;
}
}
}
private void follow(int x1, int y1, int i1, int threshold) {
int x0 = x1 == 0 ? x1 : x1 - 1;
int x2 = x1 == width - 1 ? x1 : x1 + 1;
int y0 = y1 == 0 ? y1 : y1 - 1;
int y2 = y1 == height -1 ? y1 : y1 + 1;
data[i1] = magnitude[i1];
for (int x = x0; x <= x2; x++) {
for (int y = y0; y <= y2; y++) {
int i2 = x + y * width;
if ((y != y1 || x != x1)
&& data[i2] == 0
&& magnitude[i2] >= threshold) {
follow(x, y, i2, threshold);
return;
}
}
}
}
private void thresholdEdges() {
for (int i = 0; i < picsize; i++) {
data[i] = data[i] > 0 ? -1 : 0xff000000;
}
}
private int luminance(float r, float g, float b) {
return Math.round(0.299f * r + 0.587f * g + 0.114f * b);
}
private void readLuminance() {
int type = sourceImage.getType();
if (type == BufferedImage.TYPE_INT_RGB || type == BufferedImage.TYPE_INT_ARGB) {
int[] pixels = (int[]) sourceImage.getData().getDataElements(0, 0, width, height, null);
for (int i = 0; i < picsize; i++) {
int p = pixels[i];
int r = (p & 0xff0000) >> 16;
int g = (p & 0xff00) >> 8;
int b = p & 0xff;
data[i] = luminance(r, g, b);
}
} else if (type == BufferedImage.TYPE_BYTE_GRAY) {
byte[] pixels = (byte[]) sourceImage.getData().getDataElements(0, 0, width, height, null);
for (int i = 0; i < picsize; i++) {
data[i] = (pixels[i] & 0xff);
}
} else if (type == BufferedImage.TYPE_USHORT_GRAY) {
short[] pixels = (short[]) sourceImage.getData().getDataElements(0, 0, width, height, null);
for (int i = 0; i < picsize; i++) {
data[i] = (pixels[i] & 0xffff) / 256;
}
} else if (type == BufferedImage.TYPE_3BYTE_BGR) {
byte[] pixels = (byte[]) sourceImage.getData().getDataElements(0, 0, width, height, null);
int offset = 0;
for (int i = 0; i < picsize; i++) {
int b = pixels[offset++] & 0xff;
int g = pixels[offset++] & 0xff;
int r = pixels[offset++] & 0xff;
data[i] = luminance(r, g, b);
}
} else {
throw new IllegalArgumentException("Unsupported image type: " + type);
}
}
private void normalizeContrast() {
int[] histogram = new int[256];
for (int i = 0; i < data.length; i++) {
histogram[data[i]]++;
}
int[] remap = new int[256];
int sum = 0;
int j = 0;
for (int i = 0; i < histogram.length; i++) {
sum += histogram[i];
int target = sum*255/picsize;
for (int k = j+1; k <=target; k++) {
remap[k] = i;
}
j = target;
}
for (int i = 0; i < data.length; i++) {
data[i] = remap[data[i]];
}
}
private void writeEdges(int pixels[]) {
if (edgesImage == null) {
edgesImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
}
edgesImage.getWritableTile(0, 0).setDataElements(0, 0, width, height, pixels);
}
}
I've been spending some time with the Gibara Canny implementation and I'm inclined to agree with Settembrini's comment (the side note at the end of this thread); further to this, one needs to change the implementation of the Gaussian kernel generation.
The Gibara Canny uses:
(g1 + g2 + g3) / 3f / (2f * (float) Math.PI * kernelRadius * kernelRadius)
The averaging across a pixel (±0.5 pixels) in (g1 + g2 + g3) / 3f is great, but the correct normalisation in the denominator for a single dimension is:
(g1 + g2 + g3) / 3f / (Math.sqrt(2f * (float) Math.PI) * kernelRadius)
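For concreteness, a hedged sketch of how that normalisation would slot into the kernel loop of Gibara's computeGradients (same variable names as his code; illustrative rather than a tested drop-in patch):
    for (kwidth = 0; kwidth < kernelWidth; kwidth++) {
        float g1 = gaussian(kwidth, kernelRadius);
        if (g1 <= GAUSSIAN_CUT_OFF && kwidth >= 2) break;
        float g2 = gaussian(kwidth - 0.5f, kernelRadius);
        float g3 = gaussian(kwidth + 0.5f, kernelRadius);
        // average over the pixel as before, then normalise by sqrt(2*pi)*sigma rather than 2*pi*sigma^2
        kernel[kwidth] = (g1 + g2 + g3) / 3f / ((float) Math.sqrt(2f * Math.PI) * kernelRadius);
        diffKernel[kwidth] = g3 - g2;
    }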
The standard deviation kernelRadius plays the role of sigma in the one-dimensional Gaussian G(x) = exp(-x^2 / (2 * sigma^2)) / (sqrt(2 * pi) * sigma).
I'm assuming that Gibara is attempting to implement the two-dimensional Gaussian G(x, y) = exp(-(x^2 + y^2) / (2 * sigma^2)) / (2 * pi * sigma^2), whose convolution is the direct product of the two one-dimensional Gaussians. Whilst that is probably possible and more concise, the following code will correctly convolve in both directions with the above normalisation:
// First Convolution
for (int x = initX; x < maxX; x++) {
for (int y = initY; y < maxY; y += sourceImage.width) {
int index = x + y;
float sumX = data[index] * kernel[0];
int xOffset = 1;
int yOffset = sourceImage.width;
for (; xOffset < k ;) {
sumX += kernel[xOffset] * (data[index - xOffset] + data[index + xOffset]);
yOffset += sourceImage.width;
xOffset++;
}
xConv[index] = sumX;
}
}
// Second Convolution
for (int x = initX; x < maxX; x++) {
for (int y = initY; y < maxY; y += sourceImage.width) {
int index = x + y;
float sumY = xConv[index] * kernel[0];
int xOffset = 1;
int yOffset = sourceImage.width;
for (; xOffset < k ;) {
sumY += kernel[xOffset] * (xConv[index - yOffset] + xConv[index + yOffset]); // second pass: convolve the x-blurred image in the y direction
yOffset += sourceImage.width;
xOffset++;
}
yConv[index] = sumY;
}
}
NB: yConv[] now holds the convolution in both directions, so the subsequent gradient (Sobel-style) calculations become:
for (int x = initX; x < maxX; x++) {
for (int y = initY; y < maxY; y += sourceImage.width) {
float sum = 0f;
int index = x + y;
for (int i = 1; i < k; i++)
sum += diffKernel[i] * (yConv[index - i] - yConv[index + i]);
xGradient[index] = sum;
}
}
for (int x = k; x < sourceImage.width - k; x++) {
for (int y = initY; y < maxY; y += sourceImage.width) {
float sum = 0.0f;
int index = x + y;
int yOffset = sourceImage.width;
for (int i = 1; i < k; i++) {
sum += diffKernel[i] * (yConv[index - yOffset] - yConv[index + yOffset]);
yOffset += sourceImage.width;
}
yGradient[index] = sum;
}
}
Gibara's very neat implementation of non-maximum suppression requires that these gradients be calculated separately; however, if you want to output an image with these gradients you can sum them using either the Euclidean or the Manhattan distance. The Euclidean version would look like so:
// Calculate the Euclidean distance between x & y gradients prior to suppression
int [] gradients = new int [picsize];
for (int i = 0; i < xGradient.length; i++) {
gradients[i] = (int) Math.sqrt(xGradient[i] * xGradient[i] + yGradient[i] * yGradient[i]);
}
Hope this helps and is all in order; apologies for my code! Critique is most welcome.
In addition to Favonius' answer, you might want to try Greg's OpenCV Processing library which you can now easily install via Sketch > Import Library... > Add Library... and select OpenCV for Processing
After you install the library, you can have a play with the FindEdges example:
import gab.opencv.*;
OpenCV opencv;
PImage src, canny, scharr, sobel;
void setup() {
src = loadImage("test.jpg");
size(src.width, src.height);
opencv = new OpenCV(this, src);
opencv.findCannyEdges(20,75);
canny = opencv.getSnapshot();
opencv.loadImage(src);
opencv.findScharrEdges(OpenCV.HORIZONTAL);
scharr = opencv.getSnapshot();
opencv.loadImage(src);
opencv.findSobelEdges(1,0);
sobel = opencv.getSnapshot();
}
void draw() {
pushMatrix();
scale(0.5);
image(src, 0, 0);
image(canny, src.width, 0);
image(scharr, 0, src.height);
image(sobel, src.width, src.height);
popMatrix();
text("Source", 10, 25);
text("Canny", src.width/2 + 10, 25);
text("Scharr", 10, src.height/2 + 25);
text("Sobel", src.width/2 + 10, src.height/2 + 25);
}
Just as a side note: I studied the Gibara Canny implementation some time ago and found some flaws. For example, he separates the Gaussian filtering into 1-D filters in the x and y directions (which is fine and efficient as such), but then he doesn't apply the two passes of those filters one after another; he just applies SobelX to the x-first-pass Gaussian and SobelY to the y-first-pass Gaussian, which of course leads to low-quality gradients. So be careful when just copy-pasting such code.
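To make that concrete, here is a small self-contained sketch (illustrative helper names, not Gibara's API) of what applying the two 1-D passes one after another means; the derivative filters would then be run on the fully blurred result rather than on two differently half-blurred images.
    // Separable Gaussian blur done as two passes: the 1-D kernel along the rows,
    // then the same kernel along the columns of that intermediate result.
    static float[] separableBlur(float[] src, float[] kernel, int w, int h) {
        return convolveCols(convolveRows(src, kernel, w, h), kernel, w, h);
    }

    // Convolve each row with a symmetric 1-D kernel (kernel[0] is the centre tap).
    static float[] convolveRows(float[] src, float[] kernel, int w, int h) {
        float[] out = src.clone();
        int r = kernel.length - 1;
        for (int y = 0; y < h; y++) {
            for (int x = r; x < w - r; x++) {
                float sum = src[y * w + x] * kernel[0];
                for (int i = 1; i <= r; i++) {
                    sum += kernel[i] * (src[y * w + x - i] + src[y * w + x + i]);
                }
                out[y * w + x] = sum;
            }
        }
        return out;
    }

    // Convolve each column with the same symmetric 1-D kernel.
    static float[] convolveCols(float[] src, float[] kernel, int w, int h) {
        float[] out = src.clone();
        int r = kernel.length - 1;
        for (int y = r; y < h - r; y++) {
            for (int x = 0; x < w; x++) {
                float sum = src[y * w + x] * kernel[0];
                for (int i = 1; i <= r; i++) {
                    sum += kernel[i] * (src[(y - i) * w + x] + src[(y + i) * w + x]);
                }
                out[y * w + x] = sum;
            }
        }
        return out;
    }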