I've been having trouble with an image interpolation method in Processing. This is the code I've come up with, and I'm aware that it will throw an out-of-bounds exception since the outer loop goes further than the original image, but how can I fix that?
PImage nearestneighbor(PImage o, float sf)
{
  PImage out = createImage((int)(sf*o.width), (int)(sf*o.height), RGB);
  o.loadPixels();
  out.loadPixels();
  for (int i = 0; i < sf*o.height; i++)
  {
    for (int j = 0; j < sf*o.width; j++)
    {
      int y = round((o.width*i)/sf);
      int x = round(j / sf);
      out.pixels[(int)((sf*o.width*i)+j)] = o.pixels[(y+x)];
    }
  }
  out.updatePixels();
  return out;
}
My idea was to divide both components of the point in the scaled image by the scale factor and round them in order to obtain the nearest neighbor.
To get rid of the IndexOutOfBoundsException, try caching the results of (int)(sf*o.width) and (int)(sf*o.height).
Additionally, you might want to make sure that x and y don't leave the bounds, e.g. by using Math.min(...) and Math.max(...).
Finally, it should be int y = round(i / sf) * o.width; since you want to map back to the row in the original scale first and then multiply by the original width. Example: assume a 100x100 image and a scaling factor of 1.2. The scaled height would be 120, and thus the highest value for i would be 119. Now round((119 * 100) / 1.2) yields round(9916.66) = 9917, whereas round(119 / 1.2) * 100 yields round(99.16) * 100 = 9900 - a 17 pixel difference, and 9917 isn't even the start of a row.
Btw, the variable name y might be misleading here, since it's not the y coordinate but the index of the pixel at the coordinates (0,y), i.e. the first pixel at height y.
Thus your code might look like this:
PImage nearestneighbor(PImage o, float sf)
{
  int scaledWidth = (int)(sf*o.width);
  int scaledHeight = (int)(sf*o.height);
  PImage out = createImage(scaledWidth, scaledHeight, RGB);
  o.loadPixels();
  out.loadPixels();
  for (int i = 0; i < scaledHeight; i++) {
    for (int j = 0; j < scaledWidth; j++) {
      // clamp to the last valid row/column, hence the -1
      int y = Math.min(round(i / sf), o.height - 1) * o.width;
      int x = Math.min(round(j / sf), o.width - 1);
      out.pixels[(scaledWidth * i) + j] = o.pixels[y + x];
    }
  }
  out.updatePixels();
  return out;
}
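For a quick test, here's a minimal usage sketch; the file name "test.jpg" is just a placeholder for any image in your sketch's data folder:
PImage src;

void setup() {
  size(600, 400);
  src = loadImage("test.jpg"); // placeholder: any image in the data folder
  PImage scaled = nearestneighbor(src, 1.5);
  image(scaled, 0, 0);
}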
(please don't mark this question as not clear, I spent a lot of time posting it ;) )
Okay, I am trying to make a simple 2D Java game engine as a learning project, and part of it is rendering a filled polygon as a feature.
I am creating this algorithm myself, and I really can't figure out what I am doing wrong.
My thought process is something like so:
Loop through every line, get the number of outline points in that line, then get the X location of every one of those points.
Then loop through the line again, this time checking whether the x in the loop falls inside one of the segments defined by the points array; if so, draw it.
Disclaimer: the Polygon class is another type of mesh, and its draw method returns an int array with lines drawn through each vertex.
Disclaimer 2: I've tried other people's solutions, but none really helped me and none really explained it properly (and copying code without understanding it defeats the point of a learning project).
The draw methods are called once per frame.
FilledPolygon:
@Override
public int[] draw() {
    int[] pixels = new Polygon(verts).draw();
    int[] filled = new int[width * height];
    for (int y = 0; y < height; y++) {
        int count = 0;
        for (int x = 0; x < width; x++) {
            if (pixels[x + y * width] == 0xffffffff) {
                count++;
            }
        }
        int[] points = new int[count];
        int current = 0;
        for (int x = 0; x < width; x++) {
            if (pixels[x + y * width] == 0xffffffff) {
                points[current] = x;
                current++;
            }
        }
        if (count >= 2) {
            int num = count;
            if (count % 2 != 0)
                num--;
            for (int i = 0; i < num; i += 2) {
                for (int x = points[i]; x < points[i+1]; x++) {
                    filled[x + y * width] = 0xffffffff;
                }
            }
        }
    }
    return filled;
}
The Polygon class simply uses Bresenham's line algorithm and has nothing to do with the problem.
The game class:
@Override
public void load() {
    obj = new EngineObject();
    obj.addComponent(new MeshRenderer(new FilledPolygon(new int[][] {
        {0, 0},
        {60, 0},
        {0, 60},
        {80, 50}
    })));
    ((MeshRenderer)(obj.getComponent(MeshRenderer.class))).color = CYAN;
    obj.transform.position.Y = 100;
}
The expected result is to get this shape filled in (the outline was created using the Polygon mesh; screenshot omitted).
The actual result of using the FilledPolygon mesh is a mangled, partial fill (screenshot omitted).
Your code seems to have several problems, but I will not focus on those.
Your approach of drawing the outline and then filling the "inside" runs cannot work in the general case, because the outlines join at the vertices and intersections, where the outside-edge-inside-edge-outside alternation breaks down in an unrecoverable way (you can't know which segment to fill by just looking at a row).
You'd better use a standard polygon filling algorithm. You will find many descriptions on the Web.
For a simple but somewhat inefficient solution, work as follows:
process all lines between the minimum and maximum ordinates; let Y be the current ordinate;
loop on the edges;
assign every vertex a positive or negative sign depending on whether y ≥ Y or y < Y (mind the asymmetry!);
whenever the endpoints of an edge have a different sign, compute the intersection between the edge and the line;
you will get an even number of intersections; sort them horizontally;
draw between every other point.
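A minimal sketch of those steps, assuming the question's verts field (an int[][] of {x, y} pairs) and the same width/height fields; the method name fillScanline is mine:
public int[] fillScanline() {
    int[] filled = new int[width * height];
    // find the vertical extent of the polygon
    int minY = Integer.MAX_VALUE, maxY = Integer.MIN_VALUE;
    for (int[] v : verts) {
        minY = Math.min(minY, v[1]);
        maxY = Math.max(maxY, v[1]);
    }
    for (int Y = minY; Y <= maxY; Y++) {
        // collect the intersections of every crossing edge with line Y
        java.util.List<Double> xs = new java.util.ArrayList<>();
        for (int i = 0; i < verts.length; i++) {
            int[] a = verts[i];
            int[] b = verts[(i + 1) % verts.length];
            // signs: y >= Y counts as positive, y < Y as negative (the asymmetry)
            boolean aAbove = a[1] >= Y;
            boolean bAbove = b[1] >= Y;
            if (aAbove != bAbove) {
                // edge crosses the scanline: interpolate the x of the intersection
                double t = (Y - a[1]) / (double)(b[1] - a[1]);
                xs.add(a[0] + t * (b[0] - a[0]));
            }
        }
        java.util.Collections.sort(xs);
        // an even number of intersections: fill between every other pair
        for (int i = 0; i + 1 < xs.size(); i += 2) {
            int xStart = (int)Math.ceil(xs.get(i));
            int xEnd = (int)Math.floor(xs.get(i + 1));
            for (int x = Math.max(0, xStart); x <= Math.min(width - 1, xEnd); x++) {
                if (Y >= 0 && Y < height)
                    filled[x + Y * width] = 0xffffffff;
            }
        }
    }
    return filled;
}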
You can get a more efficient solution by keeping a trace of which edges cross the current line, in a so-called "active list". Check the algorithms known as "scanline fill".
Note that you imply that pixels[] has the same width*height size as filled[]. Based on the mangled output, I would say that they are just not the same.
Otherwise, if you just want to fill each scanline between its extreme outline pixels (assuming everything is convex), that code is overcomplicated; simply look for the two endpoints and loop between them:
public int[] draw() {
    int[] pixels = new Polygon(verts).draw();
    int[] filled = new int[width * height];
    for (int y = 0; y < height; y++) {
        int left = -1;
        for (int x = 0; x < width; x++) {
            if (pixels[x + y * width] == 0xffffffff) {
                left = x;
                break;
            }
        }
        if (left >= 0) {
            int right = left;
            for (int x = width - 1; x > left; x--) {
                if (pixels[x + y * width] == 0xffffffff) {
                    right = x;
                    break;
                }
            }
            for (int x = left; x <= right; x++) {
                filled[x + y * width] = 0xffffffff;
            }
        }
    }
    return filled;
}
However this kind of approach relies on having the entire polygon in the view, which may not always be the case in real life.
I had a quick question, and wondered if anyone had any ideas or libraries I could use for this. I am making a Java game and need to make 2D images concave. The problem is: 1. I don't know how to make an image concave. 2. I need the concave effect to be somewhat of a post-process, think Oculus Rift: everything is normal, but the player's camera distorts the normal 2D images to look 3D. I am a sophomore, so I don't know very complex math to accomplish this.
Thanks,
-Blue
If you're not using any 3D libraries or anything like that, just implement it as a simple 2D distortion. It doesn't have to be 100% mathematically correct as long as it looks OK. You can create a couple of arrays to store the distorted texture co-ordinates for your bitmap, which means you can pre-calculate the distortion once (which will be slow but only happens once) and then render multiple times using the pre-calculated values (which will be faster).
Here's a simple function using a power formula to generate a distortion field. There's nothing 3D about it, it just sucks in the center of the image to give a concave look:
int distortionU[][];
int distortionV[][];

public void computeDistortion(int width, int height)
{
    // this will be really slow but you only have to call it once:
    int halfWidth = width / 2;
    int halfHeight = height / 2;
    // work out the distance from the center in the corners:
    double maxDistance = Math.sqrt((double)((halfWidth * halfWidth) + (halfHeight * halfHeight)));
    // allocate arrays to store the distorted co-ordinates:
    distortionU = new int[width][height];
    distortionV = new int[width][height];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            // work out the distortion at this pixel:
            // find distance from the center:
            int xDiff = x - halfWidth;
            int yDiff = y - halfHeight;
            double distance = Math.sqrt((double)((xDiff * xDiff) + (yDiff * yDiff)));
            // distort the distance using a power function
            double invDistance = 1.0 - (distance / maxDistance);
            double distortedDistance = (1.0 - Math.pow(invDistance, 1.7)) * maxDistance;
            distortedDistance *= 0.7; // zoom in a little bit to avoid gaps at the edges
            // work out how much to multiply xDiff and yDiff by
            // (guarding against division by zero at the exact center):
            double distortionFactor = (distance > 0.0) ? (distortedDistance / distance) : 0.0;
            xDiff = (int)((double)xDiff * distortionFactor);
            yDiff = (int)((double)yDiff * distortionFactor);
            // save the distorted co-ordinates
            distortionU[x][y] = halfWidth + xDiff;
            distortionV[x][y] = halfHeight + yDiff;
            // clamp
            if (distortionU[x][y] < 0)
                distortionU[x][y] = 0;
            if (distortionU[x][y] >= width)
                distortionU[x][y] = width - 1;
            if (distortionV[x][y] < 0)
                distortionV[x][y] = 0;
            if (distortionV[x][y] >= height)
                distortionV[x][y] = height - 1;
        }
    }
}
Call it once passing the size of the bitmap that you want to distort. You can play around with the values or use a totally different formula to get the effect you want. Using an exponent less than one for the pow() function should give the image a convex look.
Then when you render your bitmap, or copy it to another bitmap, use the values in distortionU and distortionV to distort your bitmap, e.g.:
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        // int pixelColor = bitmap.getPixel(x, y); // gets undistorted value
        int pixelColor = bitmap.getPixel(distortionU[x][y], distortionV[x][y]); // gets distorted value
        canvas.drawPixel(x + offsetX, y + offsetY, pixelColor);
    }
}
I don't know what your actual function for drawing a pixel to the canvas is called, the above is just pseudo-code.
I am trying to create a Java function to make a bulging effect on an image by shifting the pixel to the relative centre of the image. I first take the (x,y) coordinate of the pixel, find the relative shift, x = x - (x/2), and convert it to polar form [r*cos(a), r*sin(a)]. r is found by r = Math.sqrt(x*x + y*y). The angle a is found using Math.atan2(y, x). The new radius r' is found using r' = 2*r^1.5. However, the new x,y values from [r'*cos(a), r'*sin(a)] exceed the dimensions of the image, and errors occur.
Am I making a fundamental mistake?
public void bulge()
{
    double xval, yval = 0;
    // loop through the columns
    for (int x = 0; x < this.getWidth(); x++)
    {
        // loop through the rows
        for (int y = 0; y < this.getHeight(); y++)
        {
            int redValue, greenValue, blueValue = 0;
            double newRadius = 0;
            Pixel pixel = this.getPixel(x, y);
            redValue = pixel.getRed();
            greenValue = pixel.getGreen();
            blueValue = pixel.getBlue();
            xval = x - (x/2);
            yval = y - (y/2);
            double radius = Math.sqrt(xval*xval + yval*yval);
            double angle = Math.atan2(yval, xval);
            newRadius = 2*(Math.pow(radius, 1.5));
            xval = (int)(newRadius*Math.sin(angle));
            yval = (int)(newRadius*Math.cos(angle));
            Pixel pixelNewPos = this.getPixel((int)xval, (int)yval);
            pixelNewPos.setColor(new Color(redValue, greenValue, blueValue));
        }
    }
}
It's a lot easier to successfully apply a transform from source image A to destination image B by doing the reverse transform from pixels in image B to pixels in image A.
By this I mean for each pixel in destination image B, determine the pixel or pixels in source image A that contribute to the color. That way you don't end up with a whole bunch of pixels in the target image that haven't been touched.
As an example using a linear scaling operation by 2, a simple implementation might look like this:
for (int x = 0; x < sourceWidth; ++x) {
    for (int y = 0; y < sourceHeight; ++y) {
        Pixel sourcePixel = sourceImage.getPixel(x, y);
        int destPixelX = x * 2;
        int destPixelY = y * 2;
        destImage.setPixel(destPixelX, destPixelY, sourcePixel);
    }
}
It should be clear from this code that pixels with odd X or Y values will never be set in the destination image.
A better way would be something like this:
for (int x = 0; x < destWidth; ++x) {
    for (int y = 0; y < destHeight; ++y) {
        int sourcePixelX = x / 2;
        int sourcePixelY = y / 2;
        Pixel sourcePixel = sourceImage.getPixel(sourcePixelX, sourcePixelY);
        destImage.setPixel(x, y, sourcePixel);
    }
}
Although this is not a good image upscaling algorithm in general, it does show how to make sure that all the pixels in your target image are set.
Am I making a fundamental mistake?
At a conceptual level, yes. Your algorithm takes a rectangular image and moves the locations of the pixels to give a larger, non-rectangular image. Obviously that won't fit into your original rectangle.
So you either need to clip (i.e. discard) the pixels that fall outside the rectangle, or you need to use a larger rectangle so that all of the mapped pixels fall inside it.
In the latter case there will be gaps around the edges, if your transformation is doing what you claim it does: a non-linear transformation of a rectangle is not going to have straight sides.
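Combining the two answers, here is a sketch of an inverse-mapped bulge. It assumes the question's Pixel API plus a hypothetical getColor() accessor, a Picture type for the source copy, and offsets measured from the true image centre (not x - (x/2) as in the question); the inverse of r' = 2*r^1.5 is r = (r'/2)^(2/3):
public void bulgeInverse(Picture src) // Picture is assumed to be the question's image class
{
    double cx = this.getWidth() / 2.0;
    double cy = this.getHeight() / 2.0;
    for (int x = 0; x < this.getWidth(); x++)
    {
        for (int y = 0; y < this.getHeight(); y++)
        {
            // offset of this destination pixel from the centre
            double dx = x - cx;
            double dy = y - cy;
            double rOut = Math.sqrt(dx*dx + dy*dy);
            double angle = Math.atan2(dy, dx);
            // invert r' = 2*r^1.5  =>  r = (r'/2)^(2/3)
            double rIn = Math.pow(rOut / 2.0, 2.0 / 3.0);
            int sx = (int) Math.round(cx + rIn * Math.cos(angle));
            int sy = (int) Math.round(cy + rIn * Math.sin(angle));
            // clip: only sample source pixels that actually exist
            if (sx >= 0 && sx < src.getWidth() && sy >= 0 && sy < src.getHeight())
            {
                this.getPixel(x, y).setColor(src.getPixel(sx, sy).getColor());
            }
        }
    }
}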
// load pixels into an image
this.image = new BufferedImage(this.width,
                               this.height,
                               BufferedImage.TYPE_INT_RGB);

// get actual image data for easier pixel loading
byte[] iData = new byte[this.size - 54];
for (int i = 0; i < this.size - 54; i++) {
    iData[i] = this.data[i+54]; // 54 = size of the BMP file header plus info header
}

// start from bottom row
for (int y = this.height-1; y >= 0; y--) {
    for (int x = 0; x < this.width; x++) {
        int index = (this.width*y + x) * 3;
        int b = iData[index];
        int g = iData[index+1];
        int r = iData[index+2];
        //System.out.format("R: %s\nG: %s\nB: %s\n\n", r, g, b);
        // merge rgb values to single int
        int rgb = ((r&0x0ff)<<16) | ((g&0x0ff)<<8) | (b&0x0ff);
        // build image from bottom up
        this.image.setRGB(x, this.height-1-y, rgb);
    }
}
I'm reading RGB values from a bitmap. My iData byte array is correct, as I've checked it against a hex editor. However, when I run this loop, my output image is warped, with a diagonal skew and a black stripe (screenshots omitted). I've been racking my brain for hours trying to fix this; why is it happening?
The input image is a Canadian flag.
I wasn't accounting for the zero-byte padding on the width, for which the formula is
WidthWithPadding = ceiling((w * colorDepth) / 32) * 32.
This gives the padded row size in bits; divide by 8 to get bytes.
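As a worked example (the 199-pixel width is just an illustrative value): for a 24-bit image 199 pixels wide, ceiling((199 * 24) / 32) * 32 = ceiling(149.25) * 32 = 150 * 32 = 4800 bits = 600 bytes per row, while the raw pixel data is only 199 * 3 = 597 bytes, so each row ends with 3 bytes of padding.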
Images usually have width, depth (bits per pixel), and stride. Usually the stride is not just width*depth, but padded by some amount (often used to align each row to a 16-bit boundary for memory access reasons).
Your code does not appear to be accounting for the stride or any padding, which explains the offset as you go through the image. I'm surprised the colors don't get switched (suggesting the padding is the same size as a pixel), but those undefined values are causing the black stripe through the image.
Rather than computing the index as (this.width*y + x) * 3, you should use this.stride*y + x*3, where stride is the padded row size in bytes. The BufferedImage class doesn't seem to provide that in any obvious fashion, so you need to calculate or fetch it otherwise.
The general issue that's happening is that your code has conflated stride (distance between rows in bytes) with width (number of bytes in a row); in aligned data formats these are not necessarily the same thing. The general result of this is getting the sort of skew you're seeing, as the second row starts out on the last byte(s) of the first row, the third row starts out on the second-to-last byte(s) of the second row, and so on.
The first thing you need to do is calculate the stride of the data, and use that as the multiplication factor for the row. Here is a simplified loop that also is somewhat more efficient than multiplying on a per-pixel basis:
int stride = (this.width * 3 + 3) & ~3; // rounds the row size (3 bytes per pixel) up to the nearest multiple of 4
for (int y = 0; y < this.height; y++) {
int idx = y*stride;
for (int x = 0; x < this.width; x++) {
int b = iData[idx++];
int g = iData[idx++];
int r = iData[idx++]; // BGR order: the third byte is red, not alpha
// etc.
}
}
Note that the rounding trick above ((v + a - 1) & ~(a - 1)) only works when the alignment a is a power of two; a more general form of the rounding trick is
int stride = (rowBytes + align - 1) / align * align;
e.g. (597 + 3) / 4 * 4 = 600. That said, it's very uncommon to find alignments that aren't a power of 2 in the first place.
How do I implement a low-pass filter? I have:
BufferedImage img;
int width = img.getWidth();
int height = img.getHeight();
int L = (int) (f * Math.min(width, height)); // f is some fraction of the image size
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        if (x >= width / 2 - L && x <= width / 2 + L && y >= height / 2 - L && y <= height / 2 + L) {
            img.setRGB(x, y, 0);
        }
    }
}
but first I need to transform the image; how do I do that?
Your code as written just sets the pixels in a square around the centre of the image to black. If you did this in the frequency domain, with a transform layout that puts the low-frequency components at the corners, you would have a low-pass filter, because the pixels near the centre would then be the high-frequency components, and setting them to 0 would leave just the low frequencies. To operate in the frequency domain you need to apply a Fourier transform first, and you need to take care about where in the transformed image the low-frequency components end up: different implementations of a Fourier transform put them either in the centre of the transformed image or at one of the corners.
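A sketch of the full pipeline, assuming the JTransforms library (org.jtransforms.fft.DoubleFFT_2D), an unshifted transform with DC at the corners, and a grayscale copy of the image; the cutoff parameter plays the role of L above:
import java.awt.image.BufferedImage;
import org.jtransforms.fft.DoubleFFT_2D;

public static BufferedImage lowPass(BufferedImage img, int cutoff) {
    int w = img.getWidth(), h = img.getHeight();
    // interleaved complex array: [re, im, re, im, ...], row-major
    double[] data = new double[h * w * 2];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            data[2 * (y * w + x)] = img.getRGB(x, y) & 0xff; // grayscale from one channel

    DoubleFFT_2D fft = new DoubleFFT_2D(h, w);
    fft.complexForward(data);

    // zero every bin whose wrapped frequency exceeds the cutoff;
    // with an unshifted FFT, the low frequencies sit at the corners
    for (int u = 0; u < h; u++) {
        int fu = Math.min(u, h - u);
        for (int v = 0; v < w; v++) {
            int fv = Math.min(v, w - v);
            if (fu > cutoff || fv > cutoff) {
                data[2 * (u * w + v)] = 0;
                data[2 * (u * w + v) + 1] = 0;
            }
        }
    }

    fft.complexInverse(data, true); // true = scale by 1/(w*h)

    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int g = (int) Math.max(0, Math.min(255, Math.round(data[2 * (y * w + x)])));
            out.setRGB(x, y, (g << 16) | (g << 8) | g);
        }
    }
    return out;
}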