How to create a bulging effect in Java?

I am trying to write a Java function that applies a bulging effect to an image by shifting each pixel relative to the centre of the image. I first take the (x,y) coordinate of the pixel, find the relative shift, x = x - (x/2), and convert it to polar form [r·cos(a), r·sin(a)]. The radius is found by r = Math.sqrt(x*x + y*y) and the angle a by Math.atan2(y, x). The new radius r' is found using r' = 2r^1.5. However, the new x,y values from [r'·cos(a), r'·sin(a)] exceed the dimensions of the image, and errors occur.
Am I making a fundamental mistake?
public void bulge()
{
    double xval, yval = 0;
    // loop through the columns
    for (int x = 0; x < this.getWidth(); x++)
    {
        // loop through the rows
        for (int y = 0; y < this.getHeight(); y++)
        {
            int redValue, greenValue, blueValue = 0;
            double newRadius = 0;
            Pixel pixel = this.getPixel(x, y);
            redValue = pixel.getRed();
            greenValue = pixel.getGreen();
            blueValue = pixel.getBlue();
            xval = x - (x / 2);
            yval = y - (y / 2);
            double radius = Math.sqrt(xval * xval + yval * yval);
            double angle = Math.atan2(yval, xval);
            newRadius = 2 * (Math.pow(radius, 1.5));
            xval = (int) (newRadius * Math.sin(angle));
            yval = (int) (newRadius * Math.cos(angle));
            Pixel pixelNewPos = this.getPixel((int) xval, (int) yval);
            pixelNewPos.setColor(new Color(redValue, greenValue, blueValue));
        }
    }
}

It's a lot easier to successfully apply a transform from source image A to destination image B by doing the reverse transform from pixels in image B to pixels in image A.
By this I mean for each pixel in destination image B, determine the pixel or pixels in source image A that contribute to the color. That way you don't end up with a whole bunch of pixels in the target image that haven't been touched.
As an example, take a linear scaling operation by a factor of 2; a simple implementation might look like this:
for (int x = 0; x < sourceWidth; ++x) {
    for (int y = 0; y < sourceHeight; ++y) {
        Pixel sourcePixel = sourceImage.getPixel(x, y);
        int destPixelX = x * 2;
        int destPixelY = y * 2;
        destImage.setPixel(destPixelX, destPixelY, sourcePixel);
    }
}
It should be clear from this code that any pixel with an odd X or Y value will never be set in the destination image.
A better way would be something like this:
for (int x = 0; x < destWidth; ++x) {
    for (int y = 0; y < destHeight; ++y) {
        int sourcePixelX = x / 2;
        int sourcePixelY = y / 2;
        Pixel sourcePixel = sourceImage.getPixel(sourcePixelX, sourcePixelY);
        destImage.setPixel(x, y, sourcePixel);
    }
}
Although this is not a good image upscaling algorithm in general, it does show how to make sure that all the pixels in your target image are set.
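Applying that reverse mapping to the bulge itself, a minimal sketch might look like the following. It inverts r' = 2r^1.5 to r = (r'/2)^(1/1.5), and it assumes the same Pixel API as the question plus some way to copy the source image first (the Picture(this) copy constructor here is an assumption):
public void bulgeReverse()
{
    int cx = this.getWidth() / 2;
    int cy = this.getHeight() / 2;
    Picture source = new Picture(this); // assumed copy constructor
    // for every destination pixel, find the source pixel it comes from
    for (int x = 0; x < this.getWidth(); x++)
    {
        for (int y = 0; y < this.getHeight(); y++)
        {
            double dx = x - cx;
            double dy = y - cy;
            double r = Math.sqrt(dx * dx + dy * dy);
            double angle = Math.atan2(dy, dx);
            // invert r' = 2 * r^1.5, so r = (r' / 2)^(1 / 1.5)
            double srcR = Math.pow(r / 2.0, 1.0 / 1.5);
            int srcX = cx + (int) (srcR * Math.cos(angle));
            int srcY = cy + (int) (srcR * Math.sin(angle));
            // clamp so we never read outside the source image
            srcX = Math.max(0, Math.min(this.getWidth() - 1, srcX));
            srcY = Math.max(0, Math.min(this.getHeight() - 1, srcY));
            Pixel src = source.getPixel(srcX, srcY);
            this.getPixel(x, y).setColor(
                new Color(src.getRed(), src.getGreen(), src.getBlue()));
        }
    }
}
Because every destination pixel is assigned exactly once, there are no gaps, and the clamping replaces the out-of-bounds errors from the original code.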

Am I making a fundamental mistake?
At a conceptual level, yes. Your algorithm takes a rectangular image and moves the pixels to new locations, producing a larger, non-rectangular image. Obviously that won't fit into your original rectangle.
So you either need to clip (i.e. discard) the pixels that fall outside of the rectangle, or you need to use a larger rectangle so that all of the mapped pixels fall inside it.
In the latter case, there will be gaps around the edges, if your transformation is doing what you claim it does: a non-linear transformation of a rectangle is not going to have straight sides.

Related

Incomplete Light Circle

I've made a lighting engine which allows for shadows. It works on a grid system where each pixel has a light value stored as an integer in an array. Here is a demonstration of what it looks like:
The shadow and the actual pixel coloring work fine. The only problem is the unlit pixels further out in the circle, which for some reason form a very interesting pattern (you may need to zoom into the image to see it). Here is the code that draws the light.
public void implementLighting() {
    lightLevels = new int[Game.WIDTH * Game.HEIGHT];
    // Resets the light level map to replace it with the new lighting
    for (LightSource lightSource : lights) {
        // Iterates through all light sources in the world
        double circumference = (Math.PI * lightSource.getRadius() * 2),
               segmentToDegrees = 360 / circumference,
               distanceToLighting = lightSource.getLightLevel() / lightSource.getRadius();
        // Degrades in brightness further out
        for (double i = 0; i < circumference; i++) {
            // Draws a ray to every outer pixel of the light source's reach
            double radians = Math.toRadians(i * segmentToDegrees),
                   sine = Math.sin(radians),
                   cosine = Math.cos(radians),
                   x = lightSource.getVector().getScrX() + cosine,
                   y = lightSource.getVector().getScrY() + sine,
                   nextLit = 0;
            for (double j = 0; j < lightSource.getRadius(); j++) {
                int lighting = (int) (distanceToLighting * (lightSource.getRadius() - j));
                double pixelHeight = super.getPixelHeight((int) x, (int) y);
                if ((int) j == (int) nextLit) addLighting((int) x, (int) y, lighting);
                // If light is projected to have hit the pixel
                if (pixelHeight > 0) {
                    double slope = (lightSource.getEmittingHeight() - pixelHeight) / (0 - j);
                    nextLit = (-lightSource.getRadius()) / slope;
                    /* If something is blocking it,
                     * use the heightmap and emitting height to project where the next lit pixel will be
                     */
                }
                else nextLit++;
                // Advances the light by one pixel if nothing is blocking it
                x += cosine;
                y += sine;
            }
        }
    }
    lights = new ArrayList<>();
}
The algorithm I'm using should account for every pixel within the radius of the light source that is not blocked by an object, so I'm not sure why some of the outer pixels are missing.
Thanks.
EDIT: What I found is that the unlit pixels within the radius of the light source are actually just dimmer than the others. This is a consequence of the addLighting method not simply changing the lighting of a pixel, but adding to the value that's already there. This means that the "unlit" pixels are the ones only being added to once.
To test this hypothesis, I made a program that draws a circle in the same way it is done to generate lighting. Here is the code that draws the circle:
BufferedImage image = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
Graphics g = image.getGraphics();
g.setColor(Color.white);
g.fillRect(0, 0, WIDTH, HEIGHT);
double radius = 100,
       x = (WIDTH - radius) / 2,
       y = (HEIGHT - radius) / 2,
       circumference = Math.PI * 2 * radius,
       segmentToRadians = (360 * Math.PI) / (circumference * 180);
for (double i = 0; i < circumference; i++) {
    double radians = segmentToRadians * i,
           cosine = Math.cos(radians),
           sine = Math.sin(radians),
           xPos = x + cosine,
           yPos = y + sine;
    for (int j = 0; j < radius; j++) {
        if (xPos >= 0 && xPos < WIDTH && yPos >= 0 && yPos < HEIGHT) {
            int rgb = image.getRGB((int) Math.round(xPos), (int) Math.round(yPos));
            if (rgb == Color.white.getRGB())
                image.setRGB((int) Math.round(xPos), (int) Math.round(yPos), 0);
            else
                image.setRGB((int) Math.round(xPos), (int) Math.round(yPos), Color.red.getRGB());
        }
        xPos += cosine;
        yPos += sine;
    }
}
Here is the result:
The white pixels are pixels not colored
The black pixels are pixels colored once
The red pixels are pixels colored 2 or more times
So it's actually even worse than I originally proposed: it's a combination of unlit pixels and pixels lit multiple times.
You should iterate over the actual image pixels, not polar grid points.
Correct pixel-walking code might look like this:
for (int x = 0; x < WIDTH; ++x) {
    for (int y = 0; y < HEIGHT; ++y) {
        double distance = Math.hypot(x - xCenter, y - yCenter);
        if (distance <= radius) {
            image.setRGB(x, y, YOUR_CODE_HERE);
        }
    }
}
Of course this snippet can be optimized by choosing a good filling polygon instead of the full rectangle.
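For instance, a simple linear falloff to fill in YOUR_CODE_HERE might look like this (a sketch that ignores shadows; maxLightLevel is an assumed brightness constant, not from the question):
for (int x = 0; x < WIDTH; ++x) {
    for (int y = 0; y < HEIGHT; ++y) {
        double distance = Math.hypot(x - xCenter, y - yCenter);
        if (distance <= radius) {
            // full brightness at the center, fading to zero at the edge
            int gray = (int) (maxLightLevel * (1.0 - distance / radius));
            gray = Math.min(255, Math.max(0, gray));
            image.setRGB(x, y, (gray << 16) | (gray << 8) | gray);
        }
    }
}
Every pixel inside the radius is visited exactly once, so there can be no missed or double-lit pixels.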
This can be solved by anti-aliasing.
Because you compute positions in floating-point coordinates and then snap them to an integer grid, some lossy sampling occurs:
double x,y ------(snap)---> lightLevels[int ?][int ?]
To solve that problem properly, you would have to draw partially transparent pixels (i.e. less-lit ones) around each line, with the correct light intensity. That is quite hard to calculate, though (see https://en.wikipedia.org/wiki/Spatial_anti-aliasing).
Workaround
An easier (but dirty) approach is to draw another, thicker transparent line over the line you draw, and tune the intensity as needed.
Or just make your line thicker in the first place, i.e. use a bigger but less-lit blurry point to compensate.
That should make the glitch less obvious.
(see algorithm at how do I create a line of arbitrary thickness using Bresenham?)
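One cheap way to apply this workaround with the existing addLighting method is to split each light sample across the four surrounding integer pixels, weighted by the fractional parts of the coordinates (a bilinear "splat"; this sketch assumes the addLighting(int, int, int) method from the question):
// distribute one light sample over the 2x2 block of pixels around
// the real-valued position (x, y), weighted bilinearly
void addLightingBilinear(double x, double y, int lighting) {
    int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
    double fx = x - x0, fy = y - y0;
    addLighting(x0,     y0,     (int) (lighting * (1 - fx) * (1 - fy)));
    addLighting(x0 + 1, y0,     (int) (lighting * fx       * (1 - fy)));
    addLighting(x0,     y0 + 1, (int) (lighting * (1 - fx) * fy));
    addLighting(x0 + 1, y0 + 1, (int) (lighting * fx       * fy));
}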
An even better option is to change the drawing approach entirely.
Drawing each line manually is very expensive.
You may draw the circle as a 2D sprite instead.
However, that is not applicable if you really want the ray-cast look in this image: http://www.iforce2d.net/image/explosions-raycast1.png
Split graphics from gameplay
For best performance and appearance, you may prefer the GPU to do the rendering, while using a rougher ray-cast algorithm for the gameplay.
Nonetheless, it is a very complex topic (e.g. http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/).
References
Here is more information:
http://what-when-how.com/opengl-programming-guide/antialiasing-blending-antialiasing-fog-and-polygon-offset-opengl-programming/ (opengl-antialias with image)
DirectX11 Non-Solid wireframe (a related question about directx11 with image)

Render gray-scale pixels based on an array of values between 0 and 1

I am trying to render pixels from an array.
I have an array of data that looks like this (except much larger). I would like to render this array so that each number corresponds to a pixel with a shade of gray based on the number's value (0.0 would be white, and 1.0 would be black).
I don't know where to start.
For the array you have given: if you know the width and height of the image you want rendered, you can do this:
int indx = 0;
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        // invert the value so 0.0 renders white and 1.0 renders black
        double gray = 1.0 - data[indx];
        glColor3d(gray, gray, gray);
        // drawing of it goes here; assuming glVertex2d(x, y);
        indx++;
    }
}
For this to work, width*height must be <= data.length. The index is incremented for each pixel drawn, stepping to the next number in the array.
Modify the x and y so it draws where you want. Say locX = locY = 10; then, depending on the viewport you have already set up, the image will start rendering 10px away from (probably) either the top-left or bottom-left corner. This part is simple maths if you have already started to learn how to draw in OpenGL and/or LWJGL.
int locX = 10, locY = 10; // where the image's top-left corner should start
int indx = 0;
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        double gray = 1.0 - data[indx]; // inverted as above
        glColor3d(gray, gray, gray);
        glVertex2d(locX + x, locY + y);
        indx++;
    }
}
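If you are not tied to OpenGL, a plain-Java alternative is to write the values straight into a java.awt.image.BufferedImage and draw or save that (a sketch under the same assumption that data is row-major with data.length >= width * height; the method name is mine):
import java.awt.image.BufferedImage;

BufferedImage toGrayImage(double[] data, int width, int height) {
    BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    int indx = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // invert so 0.0 maps to white (255) and 1.0 to black (0)
            int gray = (int) Math.round((1.0 - data[indx]) * 255.0);
            img.setRGB(x, y, (gray << 16) | (gray << 8) | gray);
            indx++;
        }
    }
    return img;
}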
Hope this helps.

Making an Image Concave in Java

I had a quick question, and wondered if anyone had any ideas or libraries I could use for this. I am making a Java game and need to make 2D images concave. The problem is, 1: I don't know how to make an image concave. 2: I need the concave effect to be somewhat of a post-process, think Oculus Rift: everything is normal, but the player's camera distorts the normal 2D images to look 3D. I am a sophomore, so I don't know very complex math to accomplish this.
Thanks,
-Blue
If you're not using any 3D libraries or anything like that, just implement it as a simple 2D distortion. It doesn't have to be 100% mathematically correct as long as it looks OK. You can create a couple of arrays to store the distorted texture co-ordinates for your bitmap, which means you can pre-calculate the distortion once (which will be slow but only happens once) and then render multiple times using the pre-calculated values (which will be faster).
Here's a simple function using a power formula to generate a distortion field. There's nothing 3D about it, it just sucks in the center of the image to give a concave look:
int[][] distortionU;
int[][] distortionV;

public void computeDistortion(int width, int height)
{
    // this will be really slow but you only have to call it once:
    int halfWidth = width / 2;
    int halfHeight = height / 2;
    // work out the distance from the center in the corners:
    double maxDistance = Math.sqrt((double) ((halfWidth * halfWidth) + (halfHeight * halfHeight)));
    // allocate arrays to store the distorted co-ordinates:
    distortionU = new int[width][height];
    distortionV = new int[width][height];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            // work out the distortion at this pixel:
            // find distance from the center:
            int xDiff = x - halfWidth;
            int yDiff = y - halfHeight;
            double distance = Math.sqrt((double) ((xDiff * xDiff) + (yDiff * yDiff)));
            // distort the distance using a power function
            double invDistance = 1.0 - (distance / maxDistance);
            double distortedDistance = (1.0 - Math.pow(invDistance, 1.7)) * maxDistance;
            distortedDistance *= 0.7; // zoom in a little bit to avoid gaps at the edges
            // work out how much to multiply xDiff and yDiff by
            // (guard against dividing by zero at the exact center):
            double distortionFactor = (distance == 0.0) ? 0.0 : distortedDistance / distance;
            xDiff = (int) ((double) xDiff * distortionFactor);
            yDiff = (int) ((double) yDiff * distortionFactor);
            // save the distorted co-ordinates
            distortionU[x][y] = halfWidth + xDiff;
            distortionV[x][y] = halfHeight + yDiff;
            // clamp
            if (distortionU[x][y] < 0)
                distortionU[x][y] = 0;
            if (distortionU[x][y] >= width)
                distortionU[x][y] = width - 1;
            if (distortionV[x][y] < 0)
                distortionV[x][y] = 0;
            if (distortionV[x][y] >= height)
                distortionV[x][y] = height - 1;
        }
    }
}
Call it once passing the size of the bitmap that you want to distort. You can play around with the values or use a totally different formula to get the effect you want. Using an exponent less than one for the pow() function should give the image a convex look.
Then when you render your bitmap, or copy it to another bitmap, use the values in distortionU and distortionV to distort your bitmap, e.g.:
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        // int pixelColor = bitmap.getPixel(x, y); // gets the undistorted value
        int pixelColor = bitmap.getPixel(distortionU[x][y], distortionV[x][y]); // gets the distorted value
        canvas.drawPixel(x + offsetX, y + offsetY, pixelColor);
    }
}
I don't know what your actual function for drawing a pixel to the canvas is called, the above is just pseudo-code.
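For example, with java.awt.image.BufferedImage the copy loop might look like this (a sketch; it assumes src and dest have the same width and height the distortion maps were computed for, and drops the offsetX/offsetY handling):
// copy src into dest, looking up each destination pixel through the
// precomputed distortion maps
void applyDistortion(BufferedImage src, BufferedImage dest) {
    for (int y = 0; y < src.getHeight(); y++) {
        for (int x = 0; x < src.getWidth(); x++) {
            int pixelColor = src.getRGB(distortionU[x][y], distortionV[x][y]);
            dest.setRGB(x, y, pixelColor);
        }
    }
}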

Extracting parts of an image in rectangular shapes

We are trying to extract only a portion of a captured image, but in Java we can only get a rectangular subimage, using image.getSubimage(x, y, width, height). Say I virtually split the image into 10 parts, as shown below. How can I extract only parts 1, 2, 4, 6, 8, 9 and 10, as shown in the second image, using native Java, without consuming too many resources or too much time?
Update
Below is the sample code
for (int x = 0; x < columns; x++) {
    for (int y = 0; y < rows; y++) {
        imagePart = img.getSubimage(x * this.smallWidth,
                y * this.smallHeight, this.smallWidth, this.smallHeight);
        if (!ifSelectedPart(imagePart)) {
            smallImages[x][y] = imagePart;
        } else {
            smallImages[x][y] = fillwithAlpha();
        }
    }
}
createImage(smallImages);
If these rectangles are all the same size you can treat the image as a grid and calculate what region of the image you need with a little math.
int numberColumns = 2;
int numberRows = 5;

public Rectangle getSubregion(int row, int column, int imgWidth, int imgHeight) {
    int cellWidth = imgWidth / numberColumns;
    int cellHeight = imgHeight / numberRows;
    return new Rectangle(column * cellWidth, row * cellHeight, cellWidth, cellHeight);
}

// usage
Rectangle cellOne = getSubregion(0, 0, img.getWidth(), img.getHeight());
Then just render each of those subregions to a new image in memory.
Images are by their nature rectangular. Perhaps you wish to draw over the image with a 0-alpha composite color to cover up the region you don't want to see. Either that, or create a grid of rectangular sub-images and keep only the ones you want to display.
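Combining the two suggestions, a sketch that keeps a chosen set of grid cells and leaves the rest fully transparent might look like this (the 1-based, row-major cell numbering and the keepCells name are my assumptions, not from the question):
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.Set;

BufferedImage keepCells(BufferedImage img, int rows, int cols, Set<Integer> keep) {
    int cellW = img.getWidth() / cols;
    int cellH = img.getHeight() / rows;
    // TYPE_INT_ARGB so the dropped cells stay transparent
    BufferedImage out = new BufferedImage(img.getWidth(), img.getHeight(),
            BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = out.createGraphics();
    for (int row = 0; row < rows; row++) {
        for (int col = 0; col < cols; col++) {
            int cellNumber = row * cols + col + 1; // 1-based, left to right
            if (keep.contains(cellNumber)) {
                int x = col * cellW, y = row * cellH;
                g.drawImage(img.getSubimage(x, y, cellW, cellH), x, y, null);
            }
        }
    }
    g.dispose();
    return out;
}
For the layout in the question, you would call it with the set {1, 2, 4, 6, 8, 9, 10}.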

Image interpolation - nearest neighbor (Processing)

I've been having trouble with an image interpolation method in Processing. This is the code I've come up with. I'm aware that it will throw an out-of-bounds exception, since the outer loop goes further than the original image, but how can I fix that?
PImage nearestneighbor(PImage o, float sf)
{
    PImage out = createImage((int) (sf * o.width), (int) (sf * o.height), RGB);
    o.loadPixels();
    out.loadPixels();
    for (int i = 0; i < sf * o.height; i++)
    {
        for (int j = 0; j < sf * o.width; j++)
        {
            int y = round((o.width * i) / sf);
            int x = round(j / sf);
            out.pixels[(int) ((sf * o.width * i) + j)] = o.pixels[(y + x)];
        }
    }
    out.updatePixels();
    return out;
}
My idea was to divide both components that represent the point in the scaled image by the scale factor and round it in order to obtain the nearest neighbor.
To get rid of the IndexOutOfBoundsException, try caching the results of (int)(sf*o.width) and (int)(sf*o.height).
Additionally you might want to make sure that x and y don't leave the bounds, e.g. by using Math.min(...) and Math.max(...).
Finally, it should be int y = round(i / sf) * o.width;, since you want to get the pixel in the original scale and then multiply by the original width. Example: assume a 100x100 image and a scaling factor of 1.2. The scaled height would be 120, and thus the highest value for i would be 119. Now, round((119 * 100) / 1.2) yields round(9916.66) = 9917. On the other hand, round(119 / 1.2) * 100 yields round(99.16) * 100 = 9900, a 17-pixel difference.
Btw, the variable name y might be misleading here, since it's not the y coordinate but the index of the pixel at coordinates (0, y), i.e. the first pixel in the row at height y.
Thus your code might look like this:
int scaledWidth = (int)(sf*o.width);
int scaledHeight = (int)(sf*o.height);
PImage out = createImage(scaledWidth, scaledHeight, RGB);
o.loadPixels();
out.loadPixels();
for (int i = 0; i < scaledHeight; i++) {
for (int j = 0; j < scaledWidth; j++) {
int y = Math.min( round(i / sf), o.height ) * o.width;
int x = Math.min( round(j / sf), o.width );
out.pixels[(int)((scaledWidth * i) + j)] = o.pixels[(y + x)];
}
}
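Usage from a sketch might then look like this (the filename is a placeholder):
PImage src = loadImage("input.png"); // placeholder filename
PImage scaled = nearestneighbor(src, 1.5f);
image(scaled, 0, 0);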
