Creating a for loop with inputs in Java

Here is my code:
public static Picture modifyPicture (Picture p, int value)
{
    // get width and height of the picture
    int width = p.getWidth();
    int height = p.getHeight();
    System.out.println ("Picture has width of " + width +
                        " and height of " + height);
    Picture p2 = new Picture (width, height*value);
    int x = -1;
    int y = -1;
    for ( x = 0 ; x < width ; ++x )
    {
        for ( y = 0 ; y < height ; ++y )
        {
            Pixel pixel1 = p.getPixel (x,y);
            Color c1 = pixel1.getColor();
            Pixel pixel4 = p2.getPixel (x, y);
            pixel4.setColor( c1 );
            Pixel pixel5 = p2.getPixel (x, y + 1 * height);
            pixel5.setColor( c1 );
            Pixel pixel6 = p2.getPixel (x, y + 2 * height);
            pixel6.setColor( c1 );
        }
    }
    return p2;
} // end of method
Hey guys, this seems like a simple problem, but I cannot figure out how to do it. I want to somehow loop over the statements inside the for loop. What the code does is place copies of the picture vertically on top of each other. This code works for 3 copies (value = 3), but if I want fewer or more copies, I have to keep adding or removing lines, i.e. Pixel pixel7, etc.
I need a way to loop this according to the number in value. I'm not looking for you to write my code for me, just give me some idea how I could do this. Thank you for your time and help!

You have to modify your method to loop value times for each pixel:
public static Picture modifyPicture (Picture p, int value)
{
    // get width and height of the picture
    int width = p.getWidth();
    int height = p.getHeight();
    System.out.println ("Picture has width of " + width +
                        " and height of " + height);
    Picture p2 = new Picture (width, height*value);
    for (int x = 0 ; x < width ; ++x )
    {
        for (int y = 0 ; y < height ; ++y )
        {
            Pixel pixel1 = p.getPixel (x,y);
            Color c1 = pixel1.getColor();
            // copy this pixel into each of the value vertical bands
            for (int j = 0; j < value; j++) {
                Pixel pixel = p2.getPixel (x, y + j * height);
                pixel.setColor( c1 );
            }
        }
    }
    return p2;
} // end of method
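For completeness, here is how the method might be called. This is only a sketch: the file name is a placeholder, and the Picture constructor taking a file name plus the show() method are assumed from the Media Computation Picture class this code appears to use.

Picture p = new Picture("beach.jpg");   // placeholder file name
Picture stacked = modifyPicture(p, 5);  // stack 5 copies vertically
stacked.show();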

Related

Cycle through all the pixels and obtain the number of black and white pixels in an image

Could you help me figure out what I'm doing wrong? My code has two for loops that run through a whole 157x127 pixel image, from which I read black and white through RGB; I keep two counters, one for each of those colors.
As I understand it, a 157x127 image gives 19,939 pixels, all of which should be black or white; but adding the counters gives me fewer (14,218 for white and 5,643 for black). Where am I wrong, or is my logic wrong?
Here is the image:
The application does not close by itself.
Code below:
BitmapDrawable drawable = (BitmapDrawable) imgviewREXMorphologEX.getDrawable();
Bitmap bitmap = drawable.getBitmap();
int Width = bitmap.getWidth();
int Height = bitmap.getHeight();
int contadorB = 0;
int contadorN = 0;
for (int x = 0; x <= Width - 1; x++) {
    int xi = x;
    for (int y = 0; y <= Height - 1; y++) {
        int yi = y;
        int coordenada = bitmap.getPixel(xi, yi); // pixel
        double r = Color.red(coordenada);
        double g = Color.green(coordenada);
        double b = Color.blue(coordenada);
        if (r == 255 && g == 255 && b == 255) {
            contadorB++;
        } else if (r == 0 && g == 0 && b == 0) {
            contadorN++;
        }
        txtviewcolor3.setText(contadorB + " - " + contadorN + " - " + (contadorB + contadorN) + " - " + (Width * Height));
    }
}
Add an else clause to your if statements, and see if there are actually some other colours in the image?
Your image is not just black and white!
See here a magnified section (800%, bottom right):
There are some pixels that are gray (brighter and darker ones). Depending on your purpose, some thresholds can help (e.g. r < MIN && g < MIN && b < MIN for near-black), or you can use the formula for the brightness of an RGB color and compare it to some limit(s).
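As a rough sketch of both suggestions combined (the per-channel tolerance of 30 and the contadorOtro counter are my own assumptions, not taken from either answer), the counting loop could classify near-white, near-black, and everything else:

int near = 30; // per-channel tolerance, chosen arbitrarily for illustration
int contadorB = 0, contadorN = 0, contadorOtro = 0;
for (int x = 0; x < bitmap.getWidth(); x++) {
    for (int y = 0; y < bitmap.getHeight(); y++) {
        int pixel = bitmap.getPixel(x, y);
        int r = Color.red(pixel);
        int g = Color.green(pixel);
        int b = Color.blue(pixel);
        if (r >= 255 - near && g >= 255 - near && b >= 255 - near) {
            contadorB++;      // near-white
        } else if (r <= near && g <= near && b <= near) {
            contadorN++;      // near-black
        } else {
            contadorOtro++;   // the gray pixels that explain the missing count
        }
    }
}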

Converting monochrome image to minimum number of 2d shapes

Basically, what I need to do is take a 2D array of bit flags and produce a list of 2D rectangles that fill the entire area with the minimum total number of shapes required to perfectly fill the space. I am doing this to convert a 2D top-down monochrome image of a map into rectangles that perfectly represent the passed-in image, which will be used to generate a platform in a 3D world. I need to minimize the total number of shapes used, because each shape will become a separate object, and flooding the map with one 1x1 square per pixel would be highly inefficient for that engine.
So far I have read in the image, processed it, and filled a two-dimensional array of booleans which tells me whether each pixel should be filled or unfilled, but I am unsure of the most efficient way to continue.
Here is what I have so far, as reference, if you aren't following:
public static void main(String[] args) {
File file = new File(args[0]);
BufferedImage bi = null;
try {
bi = ImageIO.read(file);
} catch (IOException ex) {
Logger.global.log(Level.SEVERE, null, ex);
}
if (bi != null) {
int[] rgb = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), new int[bi.getWidth() * bi.getHeight()], 0, bi.getWidth());
Origin origin = new Origin(bi.getWidth() / 2, bi.getHeight() / 2);
boolean[][] flags = new boolean[bi.getWidth()][bi.getHeight()];
for (int y = 0; y < bi.getHeight(); y++) {
for (int x = 0; x < bi.getWidth(); x++) {
int index = y * bi.getWidth() + x;
int color = rgb[index];
int type = color == Color.WHITE.getRGB() ? 1 : (color == Color.RED.getRGB() ? 2 : 0);
if (type == 2) {
origin = new Origin(x, y);
}
flags[x][y] = type != 1;
}
}
List<Rectangle> list = new ArrayList<>();
//Fill list with rectangles
}
}
White represents no land. Black or red represents land. The check for the red pixel marks the origin position of the map, which was just for convenience; the rectangles will be offset by the origin position if it is found.
Edit: The processing script does not need to be fast. The produced list of rectangles will be dumped, and that is what will be imported and used later, so the processing of the image does not need to be particularly optimized; it doesn't make a difference.
I also just realized that expecting a 'perfect' solution is expecting too much, since demanding exactly the fewest rectangles would make this a knapsack-style problem of the multidimensionally constrained variety, so an algorithm that simply produces a minimal number of rectangles will suffice.
Here is a reference image for completion:
Edit 2: It doesn't look like this is such an easy thing to answer, given no feedback yet, but I have started making progress. I am sure I am missing something that would vastly reduce the number of rectangles. Here is the updated progress:
static int mapWidth;
static int mapHeight;
public static void main(String[] args) {
File file = new File(args[0]);
BufferedImage bi = null;
System.out.println("Reading image...");
try {
bi = ImageIO.read(file);
} catch (IOException ex) {
Logger.global.log(Level.SEVERE, null, ex);
}
if (bi != null) {
System.out.println("Complete!");
System.out.println("Interpreting image...");
mapWidth = bi.getWidth();
mapHeight = bi.getHeight();
int[] rgb = bi.getRGB(0, 0, mapWidth, mapHeight, new int[mapWidth * mapHeight], 0, mapWidth);
Origin origin = new Origin(mapWidth / 2, mapHeight / 2);
boolean[][] flags = new boolean[mapWidth][mapHeight];
for (int y = 0; y < mapHeight; y++) {
for (int x = 0; x < mapWidth; x++) {
int index = y * mapWidth + x;
int color = rgb[index];
int type = color == Color.WHITE.getRGB() ? 1 : (color == Color.RED.getRGB() ? 2 : 0);
if (type == 2) {
origin = new Origin(x, y);
}
flags[x][y] = type != 1;
}
}
System.out.println("Complete!");
System.out.println("Processing...");
//Get Rectangles to fill space...
List<Rectangle> rectangles = getRectangles(flags, origin);
System.out.println("Complete!");
float rectangleCount = rectangles.size();
float totalCount = mapHeight * mapWidth;
System.out.println("Total units: " + (int)totalCount);
System.out.println("Total rectangles: " + (int)rectangleCount);
System.out.println("Rectangle reduction factor: " + ((1 - rectangleCount / totalCount) * 100.0) + "%");
System.out.println("Dumping data...");
try {
file = new File(file.getParentFile(), file.getName() + "_Rectangle_Data.txt");
if(file.exists()){
file.delete();
}
file.createNewFile();
BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(file)));
for(Rectangle rect: rectangles){
bw.write(rect.x + "," + rect.y + "," + rect.width + ","+ rect.height + "\n");
}
bw.flush();
bw.close();
} catch (Exception ex) {
Logger.global.log(Level.SEVERE, null, ex);
}
System.out.println("Complete!");
}else{
System.out.println("Error!");
}
}
public static void clearRange(boolean[][] flags, int xOff, int yOff, int width, int height) {
for (int y = yOff; y < yOff + height; y++) {
for (int x = xOff; x < xOff + width; x++) {
flags[x][y] = false;
}
}
}
public static boolean checkIfFilled(boolean[][] flags, int xOff, int yOff, int width, int height) {
for (int y = yOff; y < yOff + height; y++) {
for (int x = xOff; x < xOff + width; x++) {
if (!flags[x][y]) {
return false;
}
}
}
return true;
}
public static List<Rectangle> getRectangles(boolean[][] flags, Origin origin) {
List<Rectangle> rectangles = new ArrayList<>();
for (int y = 0; y < mapHeight; y++) {
for (int x = 0; x < mapWidth; x++) {
if (flags[x][y]) {
int maxWidth = 1;
int maxHeight = 1;
Loop:
//The search size limited to 400x400 so it will complete some time this century.
for (int w = Math.min(400, mapWidth - x); w > 1; w--) {
for (int h = Math.min(400, mapHeight - y); h > 1; h--) {
if (w * h > maxWidth * maxHeight) {
if (checkIfFilled(flags, x, y, w, h)) {
maxWidth = w;
maxHeight = h;
break Loop;
}
}
}
}
//Search also in the opposite direction
Loop:
for (int h = Math.min(400, mapHeight - y); h > 1; h--) {
for (int w = Math.min(400, mapWidth - x); w > 1; w--) {
if (w * h > maxWidth * maxHeight) {
if (checkIfFilled(flags, x, y, w, h)) {
maxWidth = w;
maxHeight = h;
break Loop;
}
}
}
}
rectangles.add(new Rectangle(x - origin.x, y - origin.y, maxWidth, maxHeight));
clearRange(flags, x, y, maxWidth, maxHeight);
}
}
}
return rectangles;
}
My current code's search for larger rectangles is limited to 400x400 to speed up testing, and it outputs 17,979 rectangles, which is a 99.9058% reduction compared to treating each pixel as a 1x1 square (19,095,720 pixels). So far so good.
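For comparison, here is a hedged alternative sketch, not the poster's code: greedily turn each maximal horizontal run of filled cells into a rectangle and grow it downward while every column of the run stays filled. The method name coverWithRuns is my own; it reuses the boolean[][] flags, Origin, and java.awt.Rectangle from the post, and it clears cells the same way clearRange does. It produces a valid non-overlapping covering, though not necessarily the true minimum.

public static List<Rectangle> coverWithRuns(boolean[][] flags, Origin origin) {
    int w = flags.length;
    int h = flags[0].length;
    List<Rectangle> result = new ArrayList<>();
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            if (!flags[x][y]) {
                continue;
            }
            // Find the maximal horizontal run of filled cells starting at (x, y).
            int runEnd = x;
            while (runEnd + 1 < w && flags[runEnd + 1][y]) {
                runEnd++;
            }
            int runWidth = runEnd - x + 1;
            // Grow downward while the whole run is still filled.
            int runHeight = 1;
            while (y + runHeight < h) {
                boolean rowFilled = true;
                for (int xi = x; xi <= runEnd && rowFilled; xi++) {
                    rowFilled = flags[xi][y + runHeight];
                }
                if (!rowFilled) {
                    break;
                }
                runHeight++;
            }
            // Emit the rectangle (offset by the origin, as in the post) and
            // clear its cells so they are not covered twice.
            result.add(new Rectangle(x - origin.x, y - origin.y, runWidth, runHeight));
            for (int yi = y; yi < y + runHeight; yi++) {
                for (int xi = x; xi <= runEnd; xi++) {
                    flags[xi][yi] = false;
                }
            }
            x = runEnd; // skip past the run we just consumed
        }
    }
    return result;
}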

Color data is not being stored properly

I am attempting to make a rendering system with a depth map involved, alongside the usual pixels, for dealing with alpha. My problem is that no color is being set correctly! I have tried to debug using System.out.println and by testing various components, but to no avail; I have not found a solution.
The Variables
The variables involved in drawing, setting, and clearing pixels are: private int[][] nodeMap, private int[] pixels, and private ArrayList<Integer> changedPixels.
private int[][] nodeMap stores pixels per depth layer, indexed as [depth][x + y * width], before they are transferred to the BufferedImage pixels. Every layer is initialized to a clear (transparent) black, except the lowest depth, which is a fully visible black.
private int[] pixels is the data from a BufferedImage; it is the only image ever used. All of its data is by default fully visible black.
private ArrayList<Integer> changedPixels tracks the pixels changed since the last frame, to help boost FPS by not clearing the entire screen when it is not needed. It is empty by default, since no pixels were changed from a previous frame.
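Based on that description, here is a minimal sketch of how these buffers might be initialized. The field names follow the post; the constants CLEAR_BLACK and OPAQUE_BLACK, the method name initBuffers, and the assumption of an int-backed image type (e.g. TYPE_INT_ARGB) are mine, not the poster's.

private static final int CLEAR_BLACK  = 0x00000000; // fully transparent black
private static final int OPAQUE_BLACK = 0xFF000000; // fully visible black

private void initBuffers(BufferedImage image, int maxDepth) {
    int width = image.getWidth();
    int height = image.getHeight();
    // Back the pixel array directly with the image raster
    // (requires an int-backed image type such as TYPE_INT_ARGB).
    pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
    nodeMap = new int[maxDepth][width * height];
    for (int z = 0; z < maxDepth; z++) {
        // Depth 0 starts fully visible black; higher depths start clear.
        Arrays.fill(nodeMap[z], z == 0 ? OPAQUE_BLACK : CLEAR_BLACK);
    }
    changedPixels = new ArrayList<>();
}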
The Methods
I have several methods for the rendering system: setNode(int x, int y, int z, int color, float alpha), drawScreen(), and clearScreen(). I also have rectangle- and sprite-drawing functions, which add pixels by calling the setNode() method to set colors.
private void setNode(int x, int y, int z, int color, float alpha)
{
color = Pixel.getColor(alpha, color);
if (translate) // Move the pixel to the correct location
{
x -= transX;
y -= transY;
}
if (x < 0 || x >= width || y < 0 || y >= height || alpha <= 0.0f || nodeMap[z][x + y * width] == color) // Check if we need to draw the pixel
return;
for (int zz = z + 1; zz < maxDepth; zz++)
if (Pixel.getAlpha(nodeMap[zz][x + y * width]) >= 1)
return;
if (alpha < 1.0f) // If pixel isn't completely opaque, then set it's alpha to the given one
if (nodeMap[z][x + y * width] != color) // If color isn't equal to the one we supply, change it up correctly
color = Pixel.getColorBlend(color, nodeMap[z][x + y * width]);
if (color == Pixel.WHITE) System.out.println("Pixel is white at x: " + x + ", y: " + y);
nodeMap[z][x + y * width] = color;
}
public void drawScreen()
{
int color = clearColor;
for (int x = 0; x < width; x++)
for (int y = 0; y < height; y++)
{
for (int z = maxDepth - 1; z > 0; z--)
{
if (Pixel.getAlpha(nodeMap[z][x + y * width]) > 0f)
color = Pixel.getColorBlend(color, nodeMap[z][x + y * width]);
if (Pixel.getAlpha(color) >= 1f)
break;
}
if (pixels[x + y * width] != color)
{
pixels[x + y * width] = color;
changedPixels.add(x + y * width);
}
}
}
public void clearScreen()
{
for (Integer pixel : changedPixels)
{
for (int z = 0; z < maxDepth; z++)
{
if (z > 0)
nodeMap[z][pixel] = clearColor;
else
nodeMap[z][pixel] = bgColor;
}
}
changedPixels.clear();
}
public void drawRect(int offX, int offY, int z, int width, int height, int color)
{
for (int x = 0; x < width; x++)
for (int y = 0; y < height; y++)
setNode(x + offX, y + offY, z, color);
}
public static int getColorBlend(int color1, int color2)
{
float a1 = getAlpha(color1);
float a2 = getAlpha(color2);
float a = Math.max(a1, a2);
float r = ((getRed(color1) * a1) + (getRed(color2) * a2 * (1 - a1))) / a;
float g = ((getGreen(color1) * a1) + (getGreen(color2) * a2 * (1 - a1))) / a;
float b = ((getBlue(color1) * a1) + (getBlue(color2) * a2 * (1 - a1))) / a;
return Pixel.getColor(a, r, g, b);
}
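For reference, the textbook source-over ("over") compositing rule computes a combined alpha rather than taking the maximum of the two. This is only an illustrative sketch with 0..1 channel values and a hypothetical helper, not the poster's Pixel API:

// src and dst are {alpha, red, green, blue}, each in 0..1, not premultiplied.
static float[] over(float[] src, float[] dst) {
    float a = src[0] + dst[0] * (1 - src[0]);              // resulting alpha
    if (a == 0) {
        return new float[] { 0, 0, 0, 0 };                 // fully transparent
    }
    float r = (src[1] * src[0] + dst[1] * dst[0] * (1 - src[0])) / a;
    float g = (src[2] * src[0] + dst[2] * dst[0] * (1 - src[0])) / a;
    float b = (src[3] * src[0] + dst[3] * dst[0] * (1 - src[0])) / a;
    return new float[] { a, r, g, b };
}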
The Test
What I do currently is initialize the rendering system and set the nodeMap and pixel map to the previously mentioned settings. After this has been completed, a game engine begins, and then a method in a GUI button (you might need to know this) calls drawRect(0 (x), 0 (y), 1 (z), 100 (width), 20 (height), Pixel.WHITE (color)), which runs; I have tested to confirm that the method is being called and which pixels it is drawing to.
The Problem
The overall problem is that the screen is completely white, and I can't quite figure out the reason! I do know it has nothing to do with the alpha blending, which works fine, as I used it with a previous version of a rendering system I wrote.
Any help is appreciated, and sorry that this is quite a long question; I just wanted to make sure you had everything you might need to help me solve this. I realize this may not be very efficient, but I still like the system. Thanks again!

Java 8 + Swing: How to Draw Flush Polygons

(Sorry for the long post... at least it has pictures?)
I have written an algorithm that creates a mosaic from an image by statistically generating N convex polygons that cover the image with no overlap. These polygons have anywhere between 3-8 sides, and each side has an angle that is a multiple of 45 degrees. These polygons are stored internally as a rectangle with displacements for each corner. Below is an image that explains how this works:
getRight() returns x + width - 1, and getBottom() returns y + height - 1. The class is designed to maintain a tight bounding box around filled pixels, so the coordinates shown in this image are correct. Note that width >= ul + ur + 1, width >= ll + lr + 1, height >= ul + ll + 1, and height >= ur + lr + 1, or there would be empty pixels on a side. Note also that it is possible for a corner's displacement to be 0, indicating that all pixels are filled in that corner. This enables the representation to store 3-8 sided convex polygons, each of whose sides is at least one pixel in length.
While it's nice to mathematically represent these regions, I want to draw them so I can see them. Using a simple lambda and a method that iterates over each pixel in the polygon, I can render the image perfectly. As an example, below is Claude Monet's Woman with a Parasol using 99 polygons allowing all split directions.
The code that renders this image looks like this:
public void drawOnto(Graphics graphics) {
graphics.setColor(getColor());
forEach(
(i, j) -> {
graphics.fillRect(x + i, y + j, 1, 1);
}
);
}
private void forEach(PerPixel algorithm) {
for (int j = 0; j < height; ++j) {
int nj = height - 1 - j;
int minX;
if (j < ul) {
minX = ul - j;
} else if (nj < ll) {
minX = ll - nj;
} else {
minX = 0;
}
int maxX = width;
if (j < ur) {
maxX -= ur - j;
} else if (nj < lr) {
maxX -= lr - nj;
}
for (int i = minX; i < maxX; ++i) {
algorithm.perform(i, j);
}
}
}
However, this is not ideal for many reasons. First, the concept of graphically representing a polygon is now part of the class itself; it would be better to leave rendering to other classes whose focus is drawing these polygons. Second, this entails many, many calls to fillRect(), each drawing a single pixel. Finally, I want to be able to develop other ways of rendering these polygons than drawing them as-is (for example, performing weighted interpolation over the Voronoi tessellation represented by the polygons' centers).
All of these point to generating a java.awt.Polygon that represents the vertices of the polygon (which I named Region to differentiate from the Polygon class). No problem; I wrote a method to generate a Polygon that has the corners above with no duplicates to handle the cases that a displacement is 0 or that a side has only one pixel on it:
public Polygon getPolygon() {
int[] xes = {
x + ul,
getRight() - ur,
getRight(),
getRight(),
getRight() - lr,
x + ll,
x,
x
};
int[] yes = {
y,
y,
y + ur,
getBottom() - lr,
getBottom(),
getBottom(),
getBottom() - ll,
y + ul
};
int[] keptXes = new int[8];
int[] keptYes = new int[8];
int length = 0;
for (int i = 0; i < 8; ++i) {
if (
length == 0 ||
keptXes[length - 1] != xes[i] ||
keptYes[length - 1] != yes[i]
) {
keptXes[length] = xes[i];
keptYes[length] = yes[i];
length++;
}
}
return new Polygon(keptXes, keptYes, length);
}
The problem is that, when I try to use such a Polygon with the Graphics.fillPolygon() method, it does not fill all of the pixels! Below is the same mosaic rendered with this different method:
So I have a few related questions about this behavior:
Why does the Polygon class not fill in all these pixels, even though the angles are simple multiples of 45 degrees?
How can I consistently code around this defect (as far as my application is concerned) in my renderers so that I can use my getPolygon() method as-is? I do not want to change the vertices it outputs because I need them to be precise for center-of-mass calculations.
MCE
If the above code snippets and pictures are not enough to help explain the problem, I have added a Minimal, Complete, and Verifiable Example that demonstrates the behavior I described above.
package com.sadakatsu.mce;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Polygon;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
public class Main {
@FunctionalInterface
private static interface PerPixel {
void perform(int x, int y);
}
private static class Region {
private int height;
private int ll;
private int lr;
private int width;
private int ul;
private int ur;
private int x;
private int y;
public Region(
int x,
int y,
int width,
int height,
int ul,
int ur,
int ll,
int lr
) {
if (
width < 0 || width <= ll + lr || width <= ul + ur ||
height < 0 || height <= ul + ll || height <= ur + lr ||
ul < 0 ||
ur < 0 ||
ll < 0 ||
lr < 0
) {
throw new IllegalArgumentException();
}
this.height = height;
this.ll = ll;
this.lr = lr;
this.width = width;
this.ul = ul;
this.ur = ur;
this.x = x;
this.y = y;
}
public Color getColor() {
return Color.BLACK;
}
public int getBottom() {
return y + height - 1;
}
public int getRight() {
return x + width - 1;
}
public Polygon getPolygon() {
int[] xes = {
x + ul,
getRight() - ur,
getRight(),
getRight(),
getRight() - lr,
x + ll,
x,
x
};
int[] yes = {
y,
y,
y + ur,
getBottom() - lr,
getBottom(),
getBottom(),
getBottom() - ll,
y + ul
};
int[] keptXes = new int[8];
int[] keptYes = new int[8];
int length = 0;
for (int i = 0; i < 8; ++i) {
if (
length == 0 ||
keptXes[length - 1] != xes[i] ||
keptYes[length - 1] != yes[i]
) {
keptXes[length] = xes[i];
keptYes[length] = yes[i];
length++;
}
}
return new Polygon(keptXes, keptYes, length);
}
public void drawOnto(Graphics graphics) {
graphics.setColor(getColor());
forEach(
(i, j) -> {
graphics.fillRect(x + i, y + j, 1, 1);
}
);
}
private void forEach(PerPixel algorithm) {
for (int j = 0; j < height; ++j) {
int nj = height - 1 - j;
int minX;
if (j < ul) {
minX = ul - j;
} else if (nj < ll) {
minX = ll - nj;
} else {
minX = 0;
}
int maxX = width;
if (j < ur) {
maxX -= ur - j;
} else if (nj < lr) {
maxX -= lr - nj;
}
for (int i = minX; i < maxX; ++i) {
algorithm.perform(i, j);
}
}
}
}
public static void main(String[] args) throws IOException {
int width = 10;
int height = 8;
Region region = new Region(0, 0, 10, 8, 2, 3, 4, 1);
BufferedImage image = new BufferedImage(
width,
height,
BufferedImage.TYPE_3BYTE_BGR
);
Graphics graphics = image.getGraphics();
graphics.setColor(Color.WHITE);
graphics.fillRect(0, 0, width, height);
region.drawOnto(graphics);
ImageIO.write(image, "PNG", new File("expected.png"));
image = new BufferedImage(
width,
height,
BufferedImage.TYPE_3BYTE_BGR
);
graphics = image.getGraphics();
graphics.setColor(Color.WHITE);
graphics.fillRect(0, 0, width, height);
graphics.setColor(Color.BLACK);
graphics.fillPolygon(region.getPolygon());
ImageIO.write(image, "PNG", new File("got.png"));
}
}
I spent all day working on it, and I seem to have a fix for this. The clue was found in the documentation for the Shape class, which reads:
Definition of insideness: A point is considered to lie inside a Shape if and only if:
it lies completely inside the Shape boundary or
it lies exactly on the Shape boundary and the space immediately adjacent to the point in the increasing X direction is entirely inside the boundary or
it lies exactly on a horizontal boundary segment and the space immediately adjacent to the point in the increasing Y direction is inside the boundary.
Actually, this text is a bit misleading; the third case overrides the second (i.e., even if a pixel in a horizontal boundary segment on the bottom of a Shape has a filled point to its right, it still will not be filled). Represented pictorially, the Polygon below will not draw the x'ed-out pixels:
The red, green, and blue pixels are part of the Polygon; the rest are not. The blue pixels fall under the first case, the green pixels fall under the second case, and the red pixels fall under the third case. Note that all of the rightmost and lowest pixels along the convex hull are NOT drawn. To get them to be drawn, you have to move the vertices to the orange pixels as shown to make a new rightmost/bottom-most portion of the convex hull.
The easiest way to do this is to use camickr's method: use both fillPolygon() and drawPolygon(). At least in the case of my 45-degree-multiple-edged convex hulls, drawPolygon() draws the lines to the vertices exactly (and probably for other cases as well), and thus will fill the pixels that fillPolygon() misses. However, neither fillPolygon() nor drawPolygon() will draw a single-pixel Polygon, so one has to code a special case to handle that.
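Here is a minimal sketch of that combined approach, using the Region and getPolygon() shown in this post; the method name drawRegion is mine, and the single-pixel check follows the caveat just mentioned.

public void drawRegion(Graphics graphics, Region region) {
    Polygon polygon = region.getPolygon();
    if (polygon.npoints == 1) {
        // Neither fillPolygon() nor drawPolygon() renders a single-pixel
        // Polygon, so handle that case explicitly.
        graphics.fillRect(polygon.xpoints[0], polygon.ypoints[0], 1, 1);
    } else {
        // fillPolygon() covers the interior; drawPolygon() traces the outline
        // and picks up the right/bottom boundary pixels that the fill misses.
        graphics.fillPolygon(polygon);
        graphics.drawPolygon(polygon);
    }
}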
The actual solution I developed in trying to understand the insideness definition above was to create a different Polygon with the modified corners as shown in the picture. It has the benefit (?) of calling the drawing library only once and automatically handles the special case. It probably is not actually optimal, but here is the code I used for anyone's consideration:
package com.sadakatsu.mosaic.renderer;
import java.awt.Polygon;
import java.util.Arrays;
import com.sadakatsu.mosaic.Region;
public class RegionPolygon extends Polygon {
public RegionPolygon(Region region) {
int bottom = region.getBottom();
int ll = region.getLL();
int lr = region.getLR();
int right = region.getRight();
int ul = region.getUL();
int ur = region.getUR();
int x = region.getX();
int y = region.getY();
int[] xes = {
x + ul,
right - ur + 1,
right + 1,
right + 1,
right - lr,
x + ll + 1,
x,
x
};
int[] yes = {
y,
y,
y + ur,
bottom - lr,
bottom + 1,
bottom + 1,
bottom - ll,
y + ul
};
npoints = 0;
xpoints = new int[xes.length];
ypoints = new int[xes.length];
for (int i = 0; i < xes.length; ++i) {
if (
i == 0 ||
xpoints[npoints - 1] != xes[i] ||
ypoints[npoints - 1] != yes[i]
) {
addPoint(xes[i], yes[i]);
}
}
}
}

Resize an Image drawn on screen without changing the size of the RGB pixel array

This method sets the pixel color from one image to the other. How can I set the pixels from the imgPix array into the screen.pixels array so that the image appears larger in the screen.pixels array? I dumbed down the code to make the concept easy to understand.
public void drawSprite(Screen screen)
{
for(int y = 0; y < 16; y++)
{
for(int x = 0; x < 16; x++)
{
screen.pixels[x + y * screen.WIDTH] = this.imgPix[x + y * this.WIDTH];
}
}
}
A nice little trick that I discovered is to cast to an int; this rounds the number down, repeating the pattern:
// scale = 2
//              y = 0,1,2,3,4,5,6,7,8,9   // as y increases (y++)
// (int)(y/scale) = 0,0,1,1,2,2,3,3,4,4
//
// Out of 10 destination rows, only 5 source rows are used (each drawn twice):
// this is scaling up. As y increases, (int)(y/scale) repeats with the correct
// pattern because the (int) cast rounds down.
//
// scale = 0.8
//              y = 0,1,2,3,4,5,6,7,8,9
// (int)(y/scale) = 0,1,2,3,5,6,7,8,10,11
//
// Here 2 of the source rows are skipped: this is scaling an image down.
public void drawSprite(Screen screen, Image image, float scale)
{
    for (int y = 0; y < image.height * scale; y++)
    {
        int scaleY = (int) (y / scale);
        for (int x = 0; x < image.width * scale; x++)
        {
            int scaleX = (int) (x / scale);
            screen.pixels[x + y * screen.WIDTH] = image.pixels[scaleX + scaleY * image.width];
        }
    }
}
I've answered this question before on programmers.stackexchange.com (similar enough to java to be relevant):
https://softwareengineering.stackexchange.com/questions/148123/what-is-the-algorithm-to-copy-a-region-of-one-bitmap-into-a-region-in-another/148153#148153
--
struct {
    bitmap bmp;
    float x, y, width, height;
} xfer_param;

scaled_xfer(xfer_param src, xfer_param dst)
{
    // Step through the source by a fraction of a pixel per destination pixel.
    float src_dx = src.width / dst.width;
    float src_dy = src.height / dst.height;
    float src_maxx = src.x + src.width;
    float src_maxy = src.y + src.height;
    float dst_maxx = dst.x + dst.width;
    float dst_maxy = dst.y + dst.height;
    float src_cury = src.y;
    for (float y = dst.y; y < dst_maxy; y++)
    {
        float src_curx = src.x;
        for (float x = dst.x; x < dst_maxx; x++)
        {
            // Point sampling - you can also impl as bilinear or other
            dst.bmp[x,y] = src.bmp[src_curx, src_cury];
            src_curx += src_dx;
        }
        src_cury += src_dy;
    }
}
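Since this thread is about Java, here is a hedged Java adaptation of the same point-sampling idea for the int[]-backed Screen/Image classes used above. The method name drawScaledSprite, the destination-rectangle parameters, and screen.HEIGHT are assumptions on my part; the pixels/WIDTH/width/height field names follow the code already shown.

public void drawScaledSprite(Screen screen, Image image,
                             int dstX, int dstY, int dstWidth, int dstHeight)
{
    // How far to step through the source for each destination pixel.
    float srcDx = (float) image.width / dstWidth;
    float srcDy = (float) image.height / dstHeight;
    float srcY = 0;
    for (int y = 0; y < dstHeight; y++)
    {
        float srcX = 0;
        int screenY = dstY + y;
        for (int x = 0; x < dstWidth; x++)
        {
            int screenX = dstX + x;
            // Clip to the destination buffer; clamp the sample index in case
            // of floating-point drift on the last row or column.
            if (screenX >= 0 && screenX < screen.WIDTH
                    && screenY >= 0 && screenY < screen.HEIGHT)
            {
                int sx = Math.min((int) srcX, image.width - 1);
                int sy = Math.min((int) srcY, image.height - 1);
                screen.pixels[screenX + screenY * screen.WIDTH] = image.pixels[sx + sy * image.width];
            }
            srcX += srcDx;
        }
        srcY += srcDy;
    }
}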
