Low pass filter image processing in Java

How do I implement a low-pass filter? I have:
BufferedImage img;
int width = img.getWidth();
int height = img.getHeight();
int L = (int) (f * Math.min(width, height));
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        if (x >= width / 2 - L && x <= width / 2 + L && y >= -L + height / 2 && y <= L + height / 2) {
            img.setRGB(x, y, 0);
        }
        else {}
    }
}
But first I should transform the image. How do I do that?

Your code as written would just set the pixels near the edge of the image to black. If you did this in the frequency domain you would have a low-pass filter, because the pixels near the edge of the image would be the high-frequency components, and setting them to 0 would leave just the low-frequency components. To operate in the frequency domain you need to apply a Fourier transform. However, you need to take care about where in the transformed image the low-frequency components end up, as different implementations of a Fourier transform may put the low-frequency components either in the center of the transformed image or at one of the corners.
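To make that concrete, here is a rough sketch of a frequency-domain low-pass filter on a single gray-scale channel. It assumes the third-party JTransforms library (org.jtransforms.fft.DoubleFFT_2D) is on the classpath; the method name, the gray array, and the square cutoff are illustrative choices, not part of your original code:

import org.jtransforms.fft.DoubleFFT_2D; // third-party FFT library (JTransforms), an assumption

// 'gray' is a height x width array of intensities; 'f' is the cutoff fraction from your code.
double[][] lowPass(double[][] gray, double f) {
    int height = gray.length;
    int width = gray[0].length;
    // interleaved complex data: data[y][2*x] = real part, data[y][2*x + 1] = imaginary part
    double[][] data = new double[height][2 * width];
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            data[y][2 * x] = gray[y][x];

    DoubleFFT_2D fft = new DoubleFFT_2D(height, width);
    fft.complexForward(data);

    // This transform leaves the low frequencies at the corners (no fftshift), so keep
    // coefficients whose wrapped distance from frequency (0,0) is at most L and zero the rest.
    int L = (int) (f * Math.min(width, height));
    for (int y = 0; y < height; y++) {
        int fy = Math.min(y, height - y); // wrapped vertical frequency
        for (int x = 0; x < width; x++) {
            int fx = Math.min(x, width - x); // wrapped horizontal frequency
            if (fx > L || fy > L) { // high-frequency component: zero it out
                data[y][2 * x] = 0;
                data[y][2 * x + 1] = 0;
            }
        }
    }

    fft.complexInverse(data, true); // scaled inverse transform back to the spatial domain

    double[][] result = new double[height][width];
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            result[y][x] = data[y][2 * x]; // real part of the filtered image
    return result;
}

The filtered values can then be clamped to 0-255 and written back with setRGB. If your FFT implementation centers the spectrum instead, the cutoff test has to measure distance from the image center rather than from the corners.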

Related

Android Java - How to remove image noise in a faster way?

I have this code to reduce image noise:
for (int x = 0; x < bitmap.getWidth(); x++) {
    for (int y = 0; y < bitmap.getHeight(); y++) {
        // get one pixel color
        int pixel = processedBitmap.getPixel(x, y);
        // retrieve color of RGB
        int R = Color.red(pixel);
        int G = Color.green(pixel);
        int B = Color.blue(pixel);
        // convert into single value
        R = G = B = (int) (0.299 * R + 0.587 * G + 0.114 * B);
        // convert to black and white + remove noise
        if (R > 162 && G > 162 && B > 162)
            bitmap.setPixel(x, y, Color.WHITE);
        else if (R < 162 && G < 162 && B < 162)
            bitmap.setPixel(x, y, Color.BLACK);
    }
}
But it takes a very long time to generate the result. Is there any way to optimize this code to make it faster?
Don't use getPixel. Get the image data as an array and use math to access the correct pixel. Write the math such that the fewest multiplications possible are used. Same for setPixel.
Don't use Color.red(), Color.green(), etc. Use masking; it's more efficient than a function call.
Even better, drop into the NDK and do this in C. Image manipulation in Java is generally less than optimal.
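For illustration, here is a minimal sketch of the array-based approach on Android, assuming bitmap is a mutable ARGB_8888 Bitmap and keeping the threshold of 162 from your code:

int width = bitmap.getWidth();
int height = bitmap.getHeight();
int[] pixels = new int[width * height];
bitmap.getPixels(pixels, 0, width, 0, 0, width, height); // one bulk read instead of per-pixel calls

for (int i = 0; i < pixels.length; i++) {
    int pixel = pixels[i];
    // extract channels with shifts and masks instead of Color.red()/green()/blue()
    int r = (pixel >> 16) & 0xFF;
    int g = (pixel >> 8) & 0xFF;
    int b = pixel & 0xFF;
    int gray = (int) (0.299 * r + 0.587 * g + 0.114 * b);
    // threshold the gray value to pure white or pure black
    pixels[i] = (gray > 162) ? 0xFFFFFFFF : 0xFF000000;
}

bitmap.setPixels(pixels, 0, width, 0, 0, width, height); // one bulk write back

This avoids the per-pixel getPixel/setPixel calls and the Color helper methods entirely, which is usually where most of the time goes.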

Render gray-scale pixels based on an array of values between 0 and 1

I am trying to render pixels from an array.
I have an array of data that looks like this (except much larger). From this array I would like to render an image so that each number in the array corresponds to a pixel with a shade of gray based on the number's value (0.0 would be white, and 1.0 would be black).
I don't know where to start.
For the array you have given: if you know the width and height of the image you want rendered, you can do this:
int indx = 0;
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        double shade = 1.0 - data[indx]; // invert so 0.0 renders white and 1.0 renders black
        glColor3d(shade, shade, shade);
        // drawing of it goes here; assuming glVertex2d(x, y);
        indx++;
    }
}
For this to work, width * height must be no larger than data.length. Increment indx for each pixel drawn to move on to the next number in the array and draw it accordingly.
Modify the x and y so it draws where you want. For example, if locX = locY = 10 then, depending on the viewport you have already set up, the image will start rendering 10px away from (probably) either the top-left or bottom-left corner. This part is simple maths once you have started to learn how to draw in OpenGL and/or LWJGL.
int locX = 10, locY = 10; // example offsets
int indx = 0;
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        double shade = 1.0 - data[indx]; // invert so 0.0 renders white and 1.0 renders black
        glColor3d(shade, shade, shade);
        glVertex2d(locX + x, locY + y);
        indx++;
    }
}
Hope this helps.
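One caveat if you are using LWJGL's legacy immediate-mode bindings: glVertex2d only takes effect between glBegin and glEnd, so the loop above would be wrapped roughly like this (GL_POINTS is an assumption about how you want to draw the pixels):

GL11.glBegin(GL11.GL_POINTS);
int indx = 0;
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        double shade = 1.0 - data[indx]; // 0.0 -> white, 1.0 -> black, as the question asks
        GL11.glColor3d(shade, shade, shade);
        GL11.glVertex2d(locX + x, locY + y);
        indx++;
    }
}
GL11.glEnd();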

Making an Image Concave in Java

I had a quick question, and wondered if anyone had any ideas or libraries I could use for this. I am making a Java game and need to make 2D images concave. The problem is: 1. I don't know how to make an image concave. 2. I need the concave effect to be somewhat of a post-process, think Oculus Rift: everything is normal, but the camera of the player distorts the normal 2D images to look 3D. I am a sophomore, so I don't know very complex math to accomplish this.
Thanks,
-Blue
If you're not using any 3D libraries or anything like that, just implement it as a simple 2D distortion. It doesn't have to be 100% mathematically correct as long as it looks OK. You can create a couple of arrays to store the distorted texture co-ordinates for your bitmap, which means you can pre-calculate the distortion once (which will be slow but only happens once) and then render multiple times using the pre-calculated values (which will be faster).
Here's a simple function using a power formula to generate a distortion field. There's nothing 3D about it; it just sucks in the center of the image to give a concave look:
int distortionU[][];
int distortionV[][];

public void computeDistortion(int width, int height)
{
    // this will be really slow but you only have to call it once:
    int halfWidth = width / 2;
    int halfHeight = height / 2;
    // work out the distance from the center in the corners:
    double maxDistance = Math.sqrt((double)((halfWidth * halfWidth) + (halfHeight * halfHeight)));
    // allocate arrays to store the distorted co-ordinates:
    distortionU = new int[width][height];
    distortionV = new int[width][height];
    for(int y = 0; y < height; y++)
    {
        for(int x = 0; x < width; x++)
        {
            // work out the distortion at this pixel:
            // find distance from the center:
            int xDiff = x - halfWidth;
            int yDiff = y - halfHeight;
            double distance = Math.sqrt((double)((xDiff * xDiff) + (yDiff * yDiff)));
            // distort the distance using a power function
            double invDistance = 1.0 - (distance / maxDistance);
            double distortedDistance = (1.0 - Math.pow(invDistance, 1.7)) * maxDistance;
            distortedDistance *= 0.7; // zoom in a little bit to avoid gaps at the edges
            // work out how much to multiply xDiff and yDiff by
            // (guard against dividing by zero at the exact center):
            double distortionFactor = (distance > 0.0) ? (distortedDistance / distance) : 0.0;
            xDiff = (int)((double)xDiff * distortionFactor);
            yDiff = (int)((double)yDiff * distortionFactor);
            // save the distorted co-ordinates
            distortionU[x][y] = halfWidth + xDiff;
            distortionV[x][y] = halfHeight + yDiff;
            // clamp
            if(distortionU[x][y] < 0)
                distortionU[x][y] = 0;
            if(distortionU[x][y] >= width)
                distortionU[x][y] = width - 1;
            if(distortionV[x][y] < 0)
                distortionV[x][y] = 0;
            if(distortionV[x][y] >= height)
                distortionV[x][y] = height - 1;
        }
    }
}
Call it once passing the size of the bitmap that you want to distort. You can play around with the values or use a totally different formula to get the effect you want. Using an exponent less than one for the pow() function should give the image a convex look.
Then when you render your bitmap, or copy it to another bitmap, use the values in distortionU and distortionV to distort your bitmap, e.g.:
for(int y = 0; y < height; y++)
{
    for(int x = 0; x < width; x++)
    {
        // int pixelColor = bitmap.getPixel(x, y); // gets undistorted value
        int pixelColor = bitmap.getPixel(distortionU[x][y], distortionV[x][y]); // gets distorted value
        canvas.drawPixel(x + offsetX, y + offsetY, pixelColor);
    }
}
I don't know what your actual function for drawing a pixel to the canvas is called, the above is just pseudo-code.
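For what it's worth, here is one possible concrete version of that loop using java.awt.image.BufferedImage; the method name is made up, and it assumes computeDistortion(width, height) has already been called for this image size:

// requires import java.awt.image.BufferedImage;
BufferedImage distort(BufferedImage source) {
    int width = source.getWidth();
    int height = source.getHeight();
    BufferedImage dest = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // read from the distorted source position, write to the straight destination position
            int pixelColor = source.getRGB(distortionU[x][y], distortionV[x][y]);
            dest.setRGB(x, y, pixelColor);
        }
    }
    return dest;
}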

How to create a bulging effect in Java?

I am trying to create a Java function to make a bulging effect on an image by shifting the pixels toward the relative centre of the image. I first take the (x,y) coordinate of the pixel, find the relative shift, x = x - (x/2), and convert it to polar form [r*cos(a), r*sin(a)]. r is found by r = Math.sqrt(x*x + y*y). Angle a is found using Math.atan2(y, x). The new radius (r') is found using r' = 2*r^1.5. However, the new x,y values from [r*cos(a), r*sin(a)] exceed the dimensions of the image, and errors occur.
Am I making a fundamental mistake?
public void bulge()
{
    double xval, yval = 0;
    // loop through the columns
    for(int x = 0; x < this.getWidth(); x++)
    {
        // loop through the rows
        for(int y = 0; y < this.getHeight(); y++)
        {
            int redValue, greenValue, blueValue = 0;
            double newRadius = 0;
            Pixel pixel = this.getPixel(x,y);
            redValue = pixel.getRed();
            greenValue = pixel.getGreen();
            blueValue = pixel.getBlue();
            xval = x - (x/2);
            yval = y - (y/2);
            double radius = Math.sqrt(xval*xval + yval*yval);
            double angle = Math.atan2(yval, xval);
            newRadius = 2*(Math.pow(radius,1.5));
            xval = (int)(newRadius*Math.sin(angle));
            yval = (int)(newRadius*Math.cos(angle));
            Pixel pixelNewPos = this.getPixel((int)xval, (int)yval);
            pixelNewPos.setColor(new Color(redValue, greenValue, blueValue));
        }
    }
}
It's a lot easier to successfully apply a transform from source image A to destination image B by doing the reverse transform from pixels in image B to pixels in image A.
By this I mean for each pixel in destination image B, determine the pixel or pixels in source image A that contribute to the color. That way you don't end up with a whole bunch of pixels in the target image that haven't been touched.
As an example using a linear scaling operation by 2, a simple implementation might look like this:
for (int x = 0; x < sourceWidth; ++x) {
    for (int y = 0; y < sourceHeight; ++y) {
        Pixel sourcePixel = sourceImage.getPixel(x, y);
        int destPixelX = x * 2;
        int destPixelY = y * 2;
        destImage.setPixel(destPixelX, destPixelY, sourcePixel);
    }
}
It should be clear from this code that pixels with odd-numbered X or Y values will never be set in the destination image.
A better way would be something like this:
for (int x = 0; x < destWidth; ++x) {
    for (int y = 0; y < destHeight; ++y) {
        int sourcePixelX = x / 2;
        int sourcePixelY = y / 2;
        Pixel sourcePixel = sourceImage.getPixel(sourcePixelX, sourcePixelY);
        destImage.setPixel(x, y, sourcePixel);
    }
}
Although this is not a good image upscaling algorithm in general, it does show how to make sure that all the pixels in your target image are set.
Am I making a fundamental mistake?
At a conceptual level, yes. Your algorithm is taking a rectangular image and moving the location of the pixels to give a larger, non-rectangular image. Obviously that won't fit into your original rectangle.
So you either need to clip (i.e. discard) the pixels that fall outside of the rectangle, or you need to use a larger rectangle so that all of the mapped pixels fall inside it.
In the latter case, there will be gaps around the edges, if your transformation is doing what you claim it does. A non-linear transformation of a rectangle is not going to have straight sides.
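Putting the two answers together, here is a rough sketch of the bulge written as a reverse mapping: for every destination pixel it works out where to read from in an untouched copy of the source. It assumes the same non-standard Picture/Pixel helper classes as the question (getPixel, getRed, setColor, java.awt.Color), and it measures the radius from the image center rather than from (x/2, y/2):

public void bulgeReverse(Picture source) // 'source' is an unmodified copy of this image
{
    int w = this.getWidth();
    int h = this.getHeight();
    double cx = w / 2.0;
    double cy = h / 2.0;
    for (int x = 0; x < w; x++)
    {
        for (int y = 0; y < h; y++)
        {
            // polar coordinates of the destination pixel, relative to the center
            double dx = x - cx;
            double dy = y - cy;
            double radius = Math.sqrt(dx * dx + dy * dy);
            double angle = Math.atan2(dy, dx);
            // inverse of r' = 2 * r^1.5 is r = (r' / 2)^(2/3)
            double sourceRadius = Math.pow(radius / 2.0, 2.0 / 3.0);
            int sx = (int) (cx + sourceRadius * Math.cos(angle));
            int sy = (int) (cy + sourceRadius * Math.sin(angle));
            // clamp so we never read outside the source image
            sx = Math.max(0, Math.min(w - 1, sx));
            sy = Math.max(0, Math.min(h - 1, sy));
            Pixel sourcePixel = source.getPixel(sx, sy);
            this.getPixel(x, y).setColor(new Color(sourcePixel.getRed(),
                    sourcePixel.getGreen(), sourcePixel.getBlue()));
        }
    }
}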

Image interpolation - nearest neighbor (Processing)

I've been having trouble with an image interpolation method in Processing. This is the code I've come up with. I'm aware that it will throw an out-of-bounds exception since the outer loop goes further than the original image, but how can I fix that?
PImage nearestneighbor (PImage o, float sf)
{
  PImage out = createImage((int)(sf*o.width), (int)(sf*o.height), RGB);
  o.loadPixels();
  out.loadPixels();
  for (int i = 0; i < sf*o.height; i++)
  {
    for (int j = 0; j < sf*o.width; j++)
    {
      int y = round((o.width*i)/sf);
      int x = round(j / sf);
      out.pixels[(int)((sf*o.width*i)+j)] = o.pixels[(y+x)];
    }
  }
  out.updatePixels();
  return out;
}
My idea was to divide both components that represent the point in the scaled image by the scale factor and round it in order to obtain the nearest neighbor.
To get rid of the IndexOutOfBoundsException, try caching the result of (int)(sf*o.width) and (int)(sf*o.height) and using those as the loop bounds.
Additionally you might want to make sure that x and y don't leave the bounds, e.g. by using Math.min(...) and Math.max(...).
Finally, it should be int y = round(i / sf) * o.width; since you want to get the pixel in the original scale and then multiply by the original width. Example: assume a 100x100 image and a scaling factor of 1.2. The scaled height would be 120 and thus the highest value for i would be 119. Now, round((119 * 100) / 1.2) yields round(9916.66) = 9917. On the other hand, round(119 / 1.2) * 100 yields round(99.16) * 100 = 9900, so you have a 17 pixel difference here.
Btw, the variable name y might be misleading here, since it's not the y coordinate but the index of the pixel at the coordinates (0,y), i.e. the first pixel at height y.
Thus your code might look like this:
int scaledWidth = (int)(sf*o.width);
int scaledHeight = (int)(sf*o.height);
PImage out = createImage(scaledWidth, scaledHeight, RGB);
o.loadPixels();
out.loadPixels();
for (int i = 0; i < scaledHeight; i++) {
  for (int j = 0; j < scaledWidth; j++) {
    int y = Math.min(round(i / sf), o.height - 1) * o.width; // clamp to the last row
    int x = Math.min(round(j / sf), o.width - 1);            // clamp to the last column
    out.pixels[(scaledWidth * i) + j] = o.pixels[y + x];
  }
}
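As a quick usage sketch (the image file name and the scale factor are made up), the function could then be called from a Processing sketch like this:

PImage original;

void setup() {
  size(600, 600);
  original = loadImage("photo.jpg"); // assumed file name in the sketch's data folder
  PImage scaled = nearestneighbor(original, 1.5);
  image(scaled, 0, 0); // draw the scaled copy at the top-left corner
}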
