I've been trying to write some code to generate a maze of any given size (made up of 32x32 tiles), but have run into an odd issue with the rendering code, in that only square mazes will be textured correctly.
I have a single .png file containing all possible wall textures plus the floor texture, and depending on the placement of walls around the currently selected wall during the texturing methods, the correct part of this .png should be selected so that walls blend together nicely. However, as mentioned above, this only works with square mazes (note: rendering is done with vertex buffer objects).
Here's the code for generating the maze (currently it just randomly fills the space with walls; I plan to adjust this part to actually generate a solvable maze once I fix this issue):
public void run() { // The maze is part of a thread called World, which runs alongside a Renderer thread
mazeWidth = 20;
mazeHeight = 15;
maze = new byte[mazeWidth][mazeHeight];
}
public static void setUpMaze() {
for (int x = 0; x < mazeWidth; x++) {
for (int y = 0; y < mazeHeight; y++) {
// TODO Make proper maze generation code
maze[x][y] = (byte) mazeGenerator.nextInt(2);
}
}
}
The code to generate the vertices for the triangles to be drawn:
private float[] getMazeGrid() { // The 12 comes from the number of coordinates needed to define a square/single tile - 2 triangles, 6 vertices, 2 coordinates per vertex
float[] mazeGrid = new float[12 * World.mazeWidth * World.mazeHeight];
int yOffset = 0;
int xOffset = 0;
// The if statements adjust the minimum x/y coordinates for each tile; the for loop iterates through the tiles
for (int i = 0; i < World.mazeWidth * World.mazeHeight; i++) {
if (i % World.mazeWidth == 0) {
xOffset = 0;
} else {
xOffset += 32;
}
if (i % World.mazeWidth == 0 && i != 0) {
yOffset += 32;
}
// The code below defines one square of the grid
mazeGrid[12 * i + 0] = xOffset;
mazeGrid[12 * i + 1] = yOffset;
mazeGrid[12 * i + 2] = xOffset;
mazeGrid[12 * i + 3] = yOffset + 32;
mazeGrid[12 * i + 4] = xOffset + 32;
mazeGrid[12 * i + 5] = yOffset + 32;
mazeGrid[12 * i + 6] = xOffset + 32;
mazeGrid[12 * i + 7] = yOffset + 32;
mazeGrid[12 * i + 8] = xOffset + 32;
mazeGrid[12 * i + 9] = yOffset;
mazeGrid[12 * i + 10] = xOffset;
mazeGrid[12 * i + 11] = yOffset;
}
return mazeGrid;
}
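For reference, the same offsets can be computed directly from the tile index; this is just a sketch of an equivalent calculation (not part of the original code), but it makes the row-major layout of the grid explicit:
// column = i % mazeWidth, row = i / mazeWidth, each tile being 32 pixels square
int xOffset = (i % World.mazeWidth) * 32;
int yOffset = (i / World.mazeWidth) * 32;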
And the code for determining which part of the texture should be used:
private float[] getTexCoords(int x, int y) {
texNumKey = 0;
if (World.maze[x][y] == 1) {
if (y > 0) {
if (World.maze[x][y - 1] == 1) texNumKey += 1;
}
if (x > 0) {
if (World.maze[x - 1][y] == 1) texNumKey += 2;
}
if (x < World.mazeWidth - 1) {
if (World.maze[x + 1][y] == 1) texNumKey += 4;
}
if (y < World.mazeHeight - 1) {
if (World.maze[x][y + 1] == 1) texNumKey += 8;
}
} else if (World.maze[x][y] == 0) {
texNumKey = 16;
}
return texMap.get(texNumKey);
}
NB: texMap is a HashMap which contains float arrays with texture coordinates to be used by the texture coordinate buffer, keyed with a number from 0-16. The code above iterates through the grid and checks the spaces around the current tile, and selects the correct texture coordinates for that wall type.
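For readers who want to see what such a map might look like: the real initialiseTextureMap() isn't shown, so the following is only an assumed sketch, supposing the 17 tiles (16 wall variants plus the floor) sit side by side in one horizontal strip of the atlas, in key order:
// Hypothetical initialiseTextureMap(): key k occupies the horizontal slice [k/17, (k+1)/17] of the atlas.
// texMap and java.util.HashMap are assumed to already be declared/imported in the class.
private void initialiseTextureMap() {
    texMap = new HashMap<Integer, float[]>();
    float tile = 1f / 17f;
    for (int key = 0; key <= 16; key++) {
        float u0 = key * tile;
        float u1 = u0 + tile;
        // Six vertices per tile, in the same winding used by getMazeGrid()
        texMap.put(key, new float[] {
            u0, 0f,  u0, 1f,  u1, 1f,
            u1, 1f,  u1, 0f,  u0, 0f
        });
    }
}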
Finally, the vertex buffer object code - setting up the VBOs:
public void setUp() {
initialiseTextureMap();
vertexData = BufferUtils.createFloatBuffer(12 * World.mazeWidth * World.mazeHeight);
vertexData.put(getMazeGrid());
vertexData.flip();
textureData = BufferUtils.createFloatBuffer(12 * World.mazeWidth * World.mazeHeight);
for (int x = 0; x < World.mazeWidth; x++) {
for (int y = 0; y < World.mazeHeight; y++) {
textureData.put(getTexCoords(x, y));
}
}
textureData.flip();
vboVertexHandle = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vboVertexHandle);
glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
vboTextureCoordHandle = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vboTextureCoordHandle);
glBufferData(GL_ARRAY_BUFFER, textureData, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
And drawing the VBOs:
public void draw() { // Draws the entity
glBindTexture(GL_TEXTURE_2D, loadTexture(this.textureKey).getTextureID());
glBindBuffer(GL_ARRAY_BUFFER, this.vboVertexHandle);
glVertexPointer(2, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, this.vboTextureCoordHandle);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glDrawArrays(GL_TRIANGLES, 0, 12 * World.mazeWidth * World.mazeHeight);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
Most unexplained variable names should be fairly self-explanatory; they are defined in the abstract superclass or in the "Maze" class constructor.
So, for clarity: the above code works perfectly when I set mazeWidth and mazeHeight to the same value, but if they differ then textures are not assigned to tiles correctly. Here are examples of the code working and failing; the top is a 10 x 10 maze and the bottom a 10 x 11: Maze Examples
EDIT: After switching the x and y for loops in the texture coordinate buffer setup:
Maze Example 2
If you need any other information/I've missed something important out, let me know.
Your problem is the combination of the x and y for loops with the put call: you were looping down columns first rather than across rows, while the vertex grid (and therefore the order of the put calls) assumes row-first looping.
This will fix the problem quickly:
for (int y = 0; y < World.mazeHeight; y++) {
for (int x = 0; x < World.mazeWidth; x++) {
textureData.put(getTexCoords(x, y));
}
}
Your texture selection will have to be updated too, mirrored on the diagonal. For example, if you were selecting a texture with the path going South, it will now be drawn with the path going East.
In setUp(), try changing
for (int x = 0; x < World.mazeWidth; x++) {
for (int y = 0; y < World.mazeHeight; y++) {
textureData.put(getTexCoords(x, y));
}
}
to
for (int y = 0; y < World.mazeHeight; y++) {
for (int x = 0; x < World.mazeWidth; x++) {
textureData.put(getTexCoords(x, y));
}
}
Suppose I have a two-dimensional grid of pixels (4 by 4 pixels) - and I have an image the size of my sketch that has been cut into 16 parts.
Now I load all 16 parts into an array. I want to map this array onto the 2D grid in turn, so that my overall image is put together again correctly; that is, top left is image 0.png and bottom right is image 15.png.
I just can't find the formula that lets me do this. I know, for example, that with x + y * width you can run through all pixels, from top left to bottom right, so I tried that. Without * width the parts don't sit together properly; with x + y * width I get an ArrayIndexOutOfBoundsException (of course).
So I thought I needed a 2D array, but with images[x][y] I get a NullPointerException. I've attached an image of what I am trying to create:
This is my code so far – without the 2D Array…
float pixelamount = 4;
float pixelsize;
PImage[] images = new PImage [16];
void setup() {
size(1080, 1080);
pixelsize = width/pixelamount;
for (int i = 0; i < images.length; i++) {
images[i] = loadImage(i + ".png");
}
imageMode(CENTER);
}
void draw() {
background(0);
pushMatrix();
translate(pixelsize/2, pixelsize/2);
for (int x = 0; x < pixelamount; x++) {
for (int y = 0; y < pixelamount; y++) {
pushMatrix();
translate(pixelsize*x, pixelsize*y);
image(images[x+y], 0, 0, pixelsize, pixelsize);
popMatrix();
}
}
popMatrix();
}
As I said – in the line image(images[x+y], 0, 0, pixelsize, pixelsize); I just do not get the math right. Do I need a 2D Array to solve this? Or something totally different?
This can be solved without a 2D array.
If the grid dimensions are known to be 4x4, the loops should simply run from 0 to 4, something like this:
void draw() {
background(0);
pushMatrix();
translate(pixelsize/2, pixelsize/2);
for (int x = 0; x < 4; x++) {
for (int y = 0; y < 4; y++) {
pushMatrix();
translate(pixelsize * x, pixelsize * y);
image(images[4 * x + y], 0, 0, pixelsize, pixelsize);
popMatrix();
}
}
popMatrix();
}
Alex is correct.
Cyrill, you're on the right track but seem to be getting confused between three ways of looking at your data:
The images array is a 1D array (indices 0 to 15)
The for loop is nested, therefore you need to convert the 2D indices to a 1D index. You are right: x + y * width gives the correct array index, but here width is not the full width of your sketch in pixels, it's the width of the grid (i.e. the number of columns in the 4x4 grid: 4).
You are getting a NullPointerException because you're trying to access elements of a 1D array as if it were 2D.
Something like this should work:
float pixelamount = 4;
float pixelsize;
PImage[] images = new PImage [16];
void setup() {
size(1080, 1080);
pixelsize = width/pixelamount;
for (int i = 0; i < images.length; i++) {
images[i] = loadImage(i + ".png");
}
//imageMode(CENTER);
}
void draw() {
background(0);
pushMatrix();
translate(pixelsize/2, pixelsize/2);
for (int x = 0; x < pixelamount; x++) {
for (int y = 0; y < pixelamount; y++) {
pushMatrix();
translate(pixelsize*x, pixelsize*y);
image(images[x + y * (int)pixelamount], 0, 0, pixelsize, pixelsize); // cast: array indices must be int
popMatrix();
}
}
popMatrix();
}
If you want to loop with a single for loop (instead of a nested one), which would match how you store your data, you can use this formula to go from a 1D index to 2D indices:
x = index % gridColumns
y = index / gridColumns
(Bear in mind these are integer operations; in other languages, like Python or JS, you'd need to pay attention to the division.)
Here's a basic example to illustrate this:
size(1080, 1080);
textAlign(CENTER, CENTER);
textFont(createFont("Courier New Bold", 12));
int pixelAmount = 4;
int pixelSize = width/pixelAmount;
int gridColumns = 4;
// iterate once
for(int i = 0; i < 16; i++){
// calculate 2D grid indices
int xIndex = i % gridColumns;
int yIndex = i / gridColumns;
// convert from index to pixel size
int x = xIndex * pixelSize;
int y = yIndex * pixelSize;
// render debug data
String debugText = "1D index:" + i +
"\n2D indices:[" + xIndex + "][" + yIndex + "]" +
"\nx, y pixels::" + x + "," + y;
fill(255);
rect(x, y, pixelSize, pixelSize);
fill(0);
text(debugText, x + pixelSize / 2, y + pixelSize / 2);
}
Here's the same example as the above using a 2D array and nested loops:
size(1080, 1080);
textAlign(CENTER, CENTER);
textFont(createFont("Courier New Bold", 12));
int pixelAmount = 4;
int pixelSize = width/pixelAmount;
int[][] grid = new int[pixelAmount][pixelAmount];
// mimic image loading (storing the 1D index)
int index = 0;
for(int y = 0; y < pixelAmount; y++)
for(int x = 0; x < pixelAmount; x++)
grid[x][y] = index++;
// mimic reading the 2D array data
for(int y = 0; y < pixelAmount; y++){
for(int x = 0; x < pixelAmount; x++){
int xPixels = x * pixelSize;
int yPixels = y * pixelSize;
// manually compute the index
// index = x + y * pixelAmount;
// or retrieve stored index
index = grid[x][y];
String debugText = "1D index:" + index + ".png" +
"\n2D indices:[" + x + "][" + y + "]" +
"\nx, y pixels::" + xPixels + "," + yPixels;
fill(255);
rect(xPixels, yPixels, pixelSize, pixelSize);
fill(0);
text(debugText, xPixels + pixelSize / 2, yPixels + pixelSize / 2);
}
}
My answer is more for the sake of completeness: it shows both the 1D and 2D ways of looking at the data.
Based on the latest answer – this is my code – working perfectly!
float pixelamount = 4;
float pixelsize;
PImage[] images = new PImage [16];
void setup() {
size(1080, 1080);
pixelsize = width/pixelamount;
for (int i = 0; i < images.length; i++) {
images[i] = loadImage(i + ".png");
}
imageMode(CENTER);
}
void draw() {
background(0);
pushMatrix();
translate(pixelsize/2, pixelsize/2);
for (int x = 0; x < pixelamount; x++) {
for (int y = 0; y < pixelamount; y++) {
pushMatrix();
translate(pixelsize * x, pixelsize * y);
image(images[x + y * int(pixelamount)], 0, 0, pixelsize, pixelsize);
popMatrix();
}
}
popMatrix();
}
I'm trying to create a heightmap colored by face, instead of vertex. For example, this is what I currently have:
But this is what I want:
I read that I have to split each vertex into multiple vertices and then index each separately for the triangles. I also know that Blender has a function like this for its models (split vertices, or something?), but I'm not sure what kind of algorithm I would follow for this. This would be the last resort, because multiplying the number of vertices in the mesh for no reason other than color doesn't seem efficient.
I also discovered something called flat shading (using the flat qualifier on the pixel color in the shaders), but it seems to only draw squares instead of triangles. Is there a way to make it shade triangles?
For reference, this is my current heightmap generation code:
public class HeightMap extends GameModel {
private static final float START_X = -0.5f;
private static final float START_Z = -0.5f;
private static final float REFLECTANCE = .1f;
public HeightMap(float minY, float maxY, float persistence, int width, int height, float spikeness) {
super(createMesh(minY, maxY, persistence, width, height, spikeness), REFLECTANCE);
}
protected static Mesh createMesh(final float minY, final float maxY, final float persistence, final int width,
final int height, float spikeness) {
SimplexNoise noise = new SimplexNoise(128, persistence, 2);// Utils.getRandom().nextInt());
float xStep = Math.abs(START_X * 2) / (width - 1);
float zStep = Math.abs(START_Z * 2) / (height - 1);
List<Float> positions = new ArrayList<>();
List<Integer> indices = new ArrayList<>();
for (int z = 0; z < height; z++) {
for (int x = 0; x < width; x++) {
// scale from [-1, 1] to [minY, maxY]
float heightY = (float) ((noise.getNoise(x * xStep * spikeness, z * zStep * spikeness) + 1f) / 2
* (maxY - minY) + minY);
positions.add(START_X + x * xStep);
positions.add(heightY);
positions.add(START_Z + z * zStep);
// Create indices
if (x < width - 1 && z < height - 1) {
int leftTop = z * width + x;
int leftBottom = (z + 1) * width + x;
int rightBottom = (z + 1) * width + x + 1;
int rightTop = z * width + x + 1;
indices.add(leftTop);
indices.add(leftBottom);
indices.add(rightTop);
indices.add(rightTop);
indices.add(leftBottom);
indices.add(rightBottom);
}
}
}
float[] verticesArr = Utils.listToArray(positions);
Color c = new Color(147, 105, 59);
float[] colorArr = new float[positions.size()];
for (int i = 0; i < colorArr.length; i += 3) {
float brightness = (Utils.getRandom().nextFloat() - 0.5f) * 0.5f;
colorArr[i] = (float) c.getRed() / 255f + brightness;
colorArr[i + 1] = (float) c.getGreen() / 255f + brightness;
colorArr[i + 2] = (float) c.getBlue() / 255f + brightness;
}
int[] indicesArr = indices.stream().mapToInt((i) -> i).toArray();
float[] normalArr = calcNormals(verticesArr, width, height);
return new Mesh(verticesArr, colorArr, normalArr, indicesArr);
}
private static float[] calcNormals(float[] posArr, int width, int height) {
Vector3f v0 = new Vector3f();
Vector3f v1 = new Vector3f();
Vector3f v2 = new Vector3f();
Vector3f v3 = new Vector3f();
Vector3f v4 = new Vector3f();
Vector3f v12 = new Vector3f();
Vector3f v23 = new Vector3f();
Vector3f v34 = new Vector3f();
Vector3f v41 = new Vector3f();
List<Float> normals = new ArrayList<>();
Vector3f normal = new Vector3f();
for (int row = 0; row < height; row++) {
for (int col = 0; col < width; col++) {
if (row > 0 && row < height - 1 && col > 0 && col < width - 1) {
int i0 = row * width * 3 + col * 3;
v0.x = posArr[i0];
v0.y = posArr[i0 + 1];
v0.z = posArr[i0 + 2];
int i1 = row * width * 3 + (col - 1) * 3;
v1.x = posArr[i1];
v1.y = posArr[i1 + 1];
v1.z = posArr[i1 + 2];
v1 = v1.sub(v0);
int i2 = (row + 1) * width * 3 + col * 3;
v2.x = posArr[i2];
v2.y = posArr[i2 + 1];
v2.z = posArr[i2 + 2];
v2 = v2.sub(v0);
int i3 = (row) * width * 3 + (col + 1) * 3;
v3.x = posArr[i3];
v3.y = posArr[i3 + 1];
v3.z = posArr[i3 + 2];
v3 = v3.sub(v0);
int i4 = (row - 1) * width * 3 + col * 3;
v4.x = posArr[i4];
v4.y = posArr[i4 + 1];
v4.z = posArr[i4 + 2];
v4 = v4.sub(v0);
v1.cross(v2, v12);
v12.normalize();
v2.cross(v3, v23);
v23.normalize();
v3.cross(v4, v34);
v34.normalize();
v4.cross(v1, v41);
v41.normalize();
normal = v12.add(v23).add(v34).add(v41);
normal.normalize();
} else {
normal.x = 0;
normal.y = 1;
normal.z = 0;
}
normal.normalize();
normals.add(normal.x);
normals.add(normal.y);
normals.add(normal.z);
}
}
return Utils.listToArray(normals);
}
}
Edit
I've tried a couple of things. I tried rearranging the indices with flat shading, but that didn't give me the look I wanted. I tried using a uniform vec3 colors array and indexing it with gl_VertexID or gl_InstanceID (I'm not entirely sure of the difference), but I couldn't get the arrays to compile.
Here is the github repo, by the way.
flat-qualified fragment shader inputs receive the same value for every fragment of a primitive; in your case, a triangle.
Of course, a triangle is composed of 3 vertices. And if the vertex shaders output 3 different values, how does the fragment shader know which value to get?
This comes down to what is called the "provoking vertex." When you render, you specify a particular primitive to use in your glDraw* call (GL_TRIANGLE_STRIP, GL_TRIANGLES, etc.). These primitive types will generate a number of base primitives (i.e. a single triangle), based on how many vertices you provided.
When a base primitive is generated, one of the vertices in that base primitive is said to be the "provoking vertex". It is that vertex's data that is used for all flat parameters.
The reason you're seeing what you are seeing is because the two adjacent triangles just happen to be using the same provoking vertex. Your mesh is smooth, so two adjacent triangles share 2 vertices. Your mesh generation just so happens to be generating a mesh such that the provoking vertex for each triangle is shared between them. Which means that the two triangles will get the same flat value.
You will need to adjust your index list or otherwise alter your mesh generation so that this doesn't happen. Or you can just divide your mesh into individual triangles; that's probably much easier.
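As an aside, if all you need is control over which vertex supplies the flat value, desktop OpenGL 3.2+ lets you change the convention directly. A minimal sketch (assuming an LWJGL-style static import; whether this helps depends on your index layout):
import static org.lwjgl.opengl.GL32.*;

// Use the first vertex of each triangle as the provoking vertex instead of the
// default GL_LAST_VERTEX_CONVENTION, changing which vertex supplies flat inputs.
glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);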
As a final resort, I just duplicated the vertices, and it seems to work. I haven't been able to profile it to see whether it causes a big performance drop. I'd be open to any other suggestions!
for (int z = 0; z < height; z++) {
for (int x = 0; x < width; x++) {
// scale from [-1, 1] to [minY, maxY]
float heightY = (float) ((noise.getNoise(x * xStep * spikeness, z * zStep * spikeness) + 1f) / 2
* (maxY - minY) + minY);
positions.add(START_X + x * xStep);
positions.add(heightY);
positions.add(START_Z + z * zStep);
positions.add(START_X + x * xStep);
positions.add(heightY);
positions.add(START_Z + z * zStep);
}
}
for (int z = 0; z < height - 1; z++) {
for (int x = 0; x < width - 1; x++) {
int leftTop = z * width + x;
int leftBottom = (z + 1) * width + x;
int rightBottom = (z + 1) * width + x + 1;
int rightTop = z * width + x + 1;
indices.add(2 * leftTop);
indices.add(2 * leftBottom);
indices.add(2 * rightTop);
indices.add(2 * rightTop + 1);
indices.add(2 * leftBottom + 1);
indices.add(2 * rightBottom + 1);
}
}
I need to create a simple Java program that draws a Bezier curve pixel by pixel through any number of points. At the moment, everything seems to be OK except that the curve always ends at the coordinates x=0, y=0.
Screenshot 1
Screenshot 2
I need it to end at the last point. My brain is not quite working today, so I'm looking for some help.
Here is what I have:
private void drawScene(){
precision = Float.parseFloat(this.jTextField4.getText());
//Clears the screen and draws X and Y lines
g.setColor(Color.white);
g.fillRect(0, 0, pWidth, pHeight);
g.setColor(Color.gray);
g.drawLine(0, offsetY, pWidth, offsetY);
g.drawLine(offsetX, 0, offsetX, pHeight);
//Drawing the points
if(pointCount > 0){
for(int i = 0;i<pointCount;i++){
g.setColor(Color.red);
g.drawString(String.valueOf(i+1), points[i].x + offsetX, points[i].y - 6 + offsetY);
g.drawOval(points[i].x + offsetX, points[i].y - 6 + offsetY, 3, 3);
}
}
//Drawing the curve
if(pointCount > 1){
float t = 0;
while(t <= 1){
g.setColor(Color.gray);
this.besierCurvePixel(t);
t += precision;
}
}
}
//Factorial
private static int fact(int n) {
int fact = 1;
for (int i = 1; i <= n; i++) {
fact *= i;
}
return fact;
}
//Bernstein polynomial
private static double bernstein(float t, int n, int i){
return (fact(n) / (fact(i) * fact(n-i))) * Math.pow(1-t, n-i) * Math.pow(t, i);
}
private void besierCurvePixel(float t){
double bPoly[] = new double[pointCount];
for(int i = 0; i < pointCount; i++){
bPoly[i] = bernstein(t, pointCount, i+1);
}
double sumX = 0;
double sumY = 0;
for(int i = 0; i < pointCount; i++){
sumX += bPoly[i] * points[i].x;
sumY += bPoly[i] * points[i].y;
}
int x, y;
x = (int) Math.round(sumX);
y = (int) Math.round(sumY);
g.drawLine(x + offsetX, y + offsetY, x + offsetX, y + offsetY);
}
This is the method for adding the points (pointCount is 0 initially):
points[pointCount] = new Point();
points[pointCount].x = evt.getX() - this.offsetX;
points[pointCount].y = evt.getY() - this.offsetY;
pointCount++;
this.drawScene();
The problem was here
for(int i = 0; i < pointCount; i++){
bPoly[i] = bernstein(t, pointCount, i+1);
}
The parameters passed to the bernstein method were incorrect: the degree should be pointCount - 1 (so 2 for 3 points, not 3), and the index should start at 0 rather than 1. With those values the weights sum to 1 for every t, the curve starts at the first control point at t = 0, and it ends at the last control point at t = 1 instead of collapsing to (0,0):
bPoly[i] = bernstein(t, pointCount - 1, i);
Where does "pointcount" get set (and to what)?
Have you tried stepping through your code to see why it continues after reaching the last point?
Is it possible that you are stepping through a loop 1 extra time, which is why the last point would have a destination set to (0,0)?
Could you set the number of steps for the app to make to each point?
Hopefully I am bringing up points to help you find your answer
Edit: If I had to guess, you are accidentally adding an additional point of (0,0) to points[]. Here is where I see it go to (0,0) after the last point:
for(int i = 0; i < pointCount; i++){
sumX += bPoly[i] * points[i].x;
sumY += bPoly[i] * points[i].y;
}
Edit: Glad you were able to fix it, and hopefully I helped with finding that issue. Best of luck in the future!
I am trying to find image in an image. I do this for desktop automation. At this moment, I'm trying to be fast, not precise. As such, I have decided to match similar image solely based on the same average color.
If I pick several icons on my desktop, for example:
And I will search for the last one (I'm still wondering what this file is):
You can clearly see what is most likely to be the match:
In different situations, this may not work. However, when the image size is given, it should be pretty reliable and lightning fast.
I can get a screenshot as a BufferedImage object:
MSWindow window = MSWindow.windowFromName("Firefox", false);
BufferedImage img = window.screenshot();
//Or, if I can estimate smaller region for searching:
BufferedImage img2 = window.screenshotCrop(20,20,50,50);
Of course, the image to search for will be loaded from a template saved in a file:
BufferedImage img = ImageIO.read(...whatever goes in there, I'm still confused...);
I've explained everything I know so that we can focus on the only problem:
Q: How can I get the average color of a BufferedImage? And how can I get the average color of a sub-rectangle of that image?
Speed wins here. In this exceptional case, I consider it more valuable than code readability.
I think that no matter what you do, you are going to have an O(wh) operation, where w is your width and h is your height.
Therefore, I'm going to post this (naive) solution to fulfil the first part of your question as I do not believe there is a faster solution.
/*
* Where bi is your image, (x0,y0) is your upper left coordinate, and (w,h)
* are your width and height respectively
*/
public static Color averageColor(BufferedImage bi, int x0, int y0, int w,
int h) {
int x1 = x0 + w;
int y1 = y0 + h;
long sumr = 0, sumg = 0, sumb = 0;
for (int x = x0; x < x1; x++) {
for (int y = y0; y < y1; y++) {
Color pixel = new Color(bi.getRGB(x, y));
sumr += pixel.getRed();
sumg += pixel.getGreen();
sumb += pixel.getBlue();
}
}
int num = w * h;
return new Color((int) (sumr / num), (int) (sumg / num), (int) (sumb / num));
}
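As a rough usage sketch for the matching step itself (template and img are placeholder names for your loaded template and screenshot), you could slide a template-sized window over the screenshot and keep the position whose average colour is closest; note that re-averaging every window this way is O(W·H·w·h), which is exactly what the constant-time approach below avoids:
// Hypothetical matching loop: compare the template's average colour against the
// average of every template-sized region of the screenshot.
Color target = averageColor(template, 0, 0, template.getWidth(), template.getHeight());
int bestX = -1, bestY = -1;
long bestDist = Long.MAX_VALUE;
for (int y = 0; y + template.getHeight() <= img.getHeight(); y++) {
    for (int x = 0; x + template.getWidth() <= img.getWidth(); x++) {
        Color c = averageColor(img, x, y, template.getWidth(), template.getHeight());
        long dr = c.getRed() - target.getRed();
        long dg = c.getGreen() - target.getGreen();
        long db = c.getBlue() - target.getBlue();
        long dist = dr * dr + dg * dg + db * db;
        if (dist < bestDist) { bestDist = dist; bestX = x; bestY = y; }
    }
}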
There is a constant-time method for finding the mean colour of a rectangular section of an image (a summed-area table, also known as an integral image), but it requires a linear preprocess. This should be fine in your case. The same method can be used to find the mean value of a rectangular prism in a 3D array, or any higher-dimensional analogue of the problem. I will use a grayscale example, but this can easily be extended to 3 or more channels simply by repeating the process per channel.
Lets say we have a 2 dimensional array of numbers we will call "img".
The first step is to generate a new array of the same dimensions where each element contains the sum of all values in the original image that lie within the rectangle that bounds that element and the top left element of the image.
You can use the following method to construct such an image in linear time:
int width = 1920;
int height = 1080;
//source data
int[] img = GrayScaleScreenCapture();
int[] helperImg = new int[width * height];
for(int y = 0; y < height; ++y)
{
for(int x = 0; x < width; ++x)
{
int total = img[y * width + x];
if(x > 0)
{
//Add value from the pixel to the left in helperImg
total += helperImg[y * width + (x - 1)];
}
if(y > 0)
{
//Add value from the pixel above in helperImg
total += helperImg[(y - 1) * width + x];
}
if(x > 0 && y > 0)
{
//Subtract value from the pixel above and to the left in helperImg
total -= helperImg[(y - 1) * width + (x - 1)];
}
helperImg[y * width + x] = total;
}
}
Now we can use helperImg to find the total of all values within a given rectangle of img in constant time:
//Some Rectangle with corners (x0, y0), (x1, y0) , (x0, y1), (x1, y1)
int x0 = 50;
int x1 = 150;
int y0 = 25;
int y1 = 200;
int totalOfRect = helperImg[y1 * width + x1];
if(x0 > 0)
{
totalOfRect -= helperImg[y1 * width + (x0 - 1)];
}
if(y0 > 0)
{
totalOfRect -= helperImg[(y0 - 1) * width + x1];
}
if(x0 > 0 && y0 > 0)
{
totalOfRect += helperImg[(y0 - 1) * width + (x0 - 1)];
}
Finally, we simply divide totalOfRect by the area of the rectangle to get the mean value:
int rWidth = x1 - x0 + 1;
int rHeight = y1 - y0 + 1;
int meanOfRect = totalOfRect / (rWidth * rHeight);
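To tie this back to the question's BufferedImage, here is a rough sketch of the same idea extended to 8-bit RGB, with one summed-area table per channel (the class and method names are my own, not from the question):
import java.awt.Color;
import java.awt.image.BufferedImage;

public final class IntegralImage {
    private final long[] sumR, sumG, sumB;
    private final int width;

    public IntegralImage(BufferedImage bi) {
        width = bi.getWidth();
        int height = bi.getHeight();
        sumR = new long[width * height];
        sumG = new long[width * height];
        sumB = new long[width * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int rgb = bi.getRGB(x, y);
                long r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                int i = y * width + x;
                // prefix-sum recurrence: current + left + above - aboveLeft
                if (x > 0) { r += sumR[i - 1]; g += sumG[i - 1]; b += sumB[i - 1]; }
                if (y > 0) { r += sumR[i - width]; g += sumG[i - width]; b += sumB[i - width]; }
                if (x > 0 && y > 0) { r -= sumR[i - width - 1]; g -= sumG[i - width - 1]; b -= sumB[i - width - 1]; }
                sumR[i] = r; sumG[i] = g; sumB[i] = b;
            }
        }
    }

    // Mean colour of the inclusive rectangle (x0, y0)-(x1, y1), in constant time.
    public Color mean(int x0, int y0, int x1, int y1) {
        long area = (long) (x1 - x0 + 1) * (y1 - y0 + 1);
        return new Color((int) (rectSum(sumR, x0, y0, x1, y1) / area),
                         (int) (rectSum(sumG, x0, y0, x1, y1) / area),
                         (int) (rectSum(sumB, x0, y0, x1, y1) / area));
    }

    private long rectSum(long[] s, int x0, int y0, int x1, int y1) {
        long total = s[y1 * width + x1];
        if (x0 > 0) total -= s[y1 * width + (x0 - 1)];
        if (y0 > 0) total -= s[(y0 - 1) * width + x1];
        if (x0 > 0 && y0 > 0) total += s[(y0 - 1) * width + (x0 - 1)];
        return total;
    }
}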
Here's a version based on k_g's answer for a full BufferedImage with adjustable sample precision (step).
public static Color getAverageColor(BufferedImage bi) {
int step = 5;
int sampled = 0;
long sumr = 0, sumg = 0, sumb = 0;
for (int x = 0; x < bi.getWidth(); x++) {
for (int y = 0; y < bi.getHeight(); y++) {
if (x % step == 0 && y % step == 0) {
Color pixel = new Color(bi.getRGB(x, y));
sumr += pixel.getRed();
sumg += pixel.getGreen();
sumb += pixel.getBlue();
sampled++;
}
}
}
int dim = bi.getWidth()*bi.getHeight();
// Log.info("step=" + step + " sampled " + sampled + " out of " + dim + " pixels (" + String.format("%.1f", (float)(100*sampled/dim)) + " %)");
return new Color(Math.round(sumr / sampled), Math.round(sumg / sampled), Math.round(sumb / sampled));
}
I am trying to properly rotate a sword in my 2D game. I have a sword image file, and I wish to rotate the image at the player's location. I tried using Graphics2D and AffineTransform, but the problem is that the player moves on a different coordinate plane (the Screen class), while the Graphics uses the literal pixel locations on the JFrame. So I realized that I need to rotate the image itself and then save it into a pixel array for my Screen class to render, but I don't know how to do this. Here is the code for my screen rendering method:
public void render(double d, double yOffset2, BufferedImage image, int colour,
int mirrorDir, double scale, SpriteSheet sheet) {
d -= xOffset;
yOffset2 -= yOffset;
boolean mirrorX = (mirrorDir & BIT_MIRROR_X) > 0;
boolean mirrorY = (mirrorDir & BIT_MIRROR_Y) > 0;
double scaleMap = scale - 1;
for (int y = 0; y < image.getHeight(); y++) {
int ySheet = y;
if (mirrorY)
ySheet = image.getHeight() - 1 - y;
int yPixel = (int) (y + yOffset2 + (y * scaleMap) - ((scaleMap * 8) / 2));
for (int x = 0; x < image.getWidth(); x++) {
int xPixel = (int) (x + d + (x * scaleMap) - ((scaleMap * 8) / 2));
int xSheet = x;
if (mirrorX)
xSheet = image.getWidth() - 1 - x;
int col = (colour >> (sheet.pixels[xSheet + ySheet
* sheet.width])) & 255;
if (col < 255) {
for (int yScale = 0; yScale < scale; yScale++) {
if (yPixel + yScale < 0 || yPixel + yScale >= height)
continue;
for (int xScale = 0; xScale < scale; xScale++) {
if (x + d < 0 || x + d >= width)
continue;
pixels[(xPixel + xScale) + (yPixel + yScale)
* width] = col;
}
}
}
}
}
}
Here is one of my poor attempts to call the render method from the Sword Class:
public void render(Screen screen) {
AffineTransform at = new AffineTransform();
at.rotate(1, image.getWidth() / 2, image.getHeight() / 2);
AffineTransformOp op = new AffineTransformOp(at,
AffineTransformOp.TYPE_BILINEAR);
image = op.filter(image, null);
screen.render(this.x, this.y, image, SwordColor, 1, 1.5, sheet);
hitBox.setLocation((int) this.x, (int) this.y);
for (Entity entity : level.getEntities()) {
if (entity instanceof Mob) {
if (hitBox.intersects(((Mob) entity).hitBox)) {
// ((Mob) entity).health--;
}
}
}
}
Thank you for any help you can provide, and please feel free to tell me if there's a better way to do this.
You can rotate() the image around an anchor point, also seen here in a Graphics2D context. The method concatenates translate(), rotate() and translate() operations, also seen here as explicit transformations.
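For illustration, a minimal sketch of that equivalence (the angle and anchor values here are placeholders, not taken from the question):
double theta = Math.toRadians(45);
double cx = image.getWidth() / 2.0, cy = image.getHeight() / 2.0;

// Convenience form: rotate about an anchor point...
AffineTransform at = AffineTransform.getRotateInstance(theta, cx, cy);

// ...which is shorthand for translating to the anchor, rotating, and translating back.
AffineTransform explicit = new AffineTransform();
explicit.translate(cx, cy);
explicit.rotate(theta);
explicit.translate(-cx, -cy);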
Addendum: It rotates the image, but how do I save the pixels of the image as an array?
Once you filter() the image, use one of the ImageIO.write() methods to save the resulting RenderedImage, for example.
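And if, per the addendum, the goal is specifically the pixels as an array rather than a file, one option is the standard BufferedImage API (a sketch; the variable names match the question's render() call):
// Pack the rotated image's ARGB pixels into a row-major int array.
BufferedImage rotated = op.filter(image, null);
int[] pixels = rotated.getRGB(0, 0, rotated.getWidth(), rotated.getHeight(),
        null, 0, rotated.getWidth());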