BufferedImage setRGB strange result - java

I've tried to rotate an image in Java using setRGB and BufferedImage, but I get a strange result. Does anyone have any idea why?
BufferedImage pic1 = ImageIO.read(new File("Images/Input-1.bmp"));
int width = pic1.getWidth(null);
int height = pic1.getHeight(null);
double angle = Math.toRadians(90);
double sin = Math.sin(angle);
double cos = Math.cos(angle);
double x0 = 0.5 * (width - 1); // point to rotate about
double y0 = 0.5 * (height - 1); // center of image
BufferedImage pic2 = pic1;
// rotation
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        double a = x - x0;
        double b = y - y0;
        int xx = (int) (+a * cos - b * sin + x0);
        int yy = (int) (+a * sin + b * cos + y0);
        if (xx >= 0 && xx < width && yy >= 0 && yy < height) {
            pic2.setRGB(x, y, pic1.getRGB(xx, yy));
        }
    }
}
ImageIO.write(pic2, "bmp", new File("Images/Output2.bmp"));
On the LEFT is the original picture and on the RIGHT is my result. Does anyone have any idea how I can fix it?
Thanks for the help.

The problem is that you're using the same image as input and output:
BufferedImage pic2 = pic1;
You must create another image for pic2 and then do the rotation, copying pixels from pic1 into pic2.
Note, however, that using getRGB and setRGB is terribly slow. It can be around 100 times faster if you manipulate the pixels directly.
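For example, a minimal sketch of the fix, reusing the names from the question; the bulk getRGB/setRGB calls shown below are just one common way to cut down on per-pixel access, not necessarily what this answer had in mind:
// Create a separate destination image instead of aliasing pic1.
// TYPE_INT_RGB is assumed here; pic1.getType() also works as long as it is not TYPE_CUSTOM.
BufferedImage pic2 = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);

// Copy all pixels out once, fill a destination array inside the rotation loops,
// and write everything back once.
int[] src = pic1.getRGB(0, 0, width, height, null, 0, width);
int[] dst = new int[width * height];
// ... inside the loops: dst[y * width + x] = src[yy * width + xx]; ...
pic2.setRGB(0, 0, width, height, dst, 0, width);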

Related

I try to rotate without a library but it makes black points in the picture

I am trying to rotate an image without the standard methods, by making a color array and manipulating it, but when I invoke the rotation I get black points (see the picture).
Here is my code; colScaled is the picture I am trying to convert to an array:
public void arrays() {
    colScaled = zoom2();
    int j = 0;
    int i = 0;
    angel = Integer.parseInt(this.mn.jTextField1.getText());
    float degree = (float) Math.toRadians(angel);
    float cos = (float) Math.cos(degree);
    float sin = (float) Math.sin(degree);
    int W = Math.round(colScaled[0].length * Math.abs(sin) + colScaled.length * Math.abs(cos));
    int H = Math.round(colScaled[0].length * Math.abs(cos) + colScaled.length * Math.abs(sin));
    int x;
    int y;
    int xn = (int) W / 2;
    int yn = (int) H / 2;
    int hw = (int) colScaled.length / 2;
    int hh = (int) colScaled[0].length / 2;
    BufferedImage image = new BufferedImage(W + 1, H + 1, im.getType());
    for (i = 0; i < colScaled.length; i++) {
        for (j = 0; j < colScaled[0].length; j++) {
            x = Math.round((i - hw) * cos - (j - hh) * sin + xn);
            y = Math.round((i - hw) * sin + (j - hh) * cos + yn);
            image.setRGB(x, y, colScaled[i][j]);
        }
    }
    ImageIcon ico = new ImageIcon(image);
    this.mn.jLabel1.setIcon(ico);
}
Notice this block in your code:
for (i = 0; i < colScaled.length; i++) {
    for (j = 0; j < colScaled[0].length; j++) {
        x = Math.round((i - hw) * cos - (j - hh) * sin + xn);
        y = Math.round((i - hw) * sin + (j - hh) * cos + yn);
        image.setRGB(x, y, colScaled[i][j]);
    }
}
In that loop, i and j are pixel coordinates in the source image (colScaled), while the computed x and y are pixel coordinates in the destination image (image).
The objective of this code is to fill every pixel of the destination image (image).
With your loop, there is no guarantee that every pixel in the destination image gets filled, even those inside the rectangular zone.
The image above depicts the problem.
See? It is possible that the red pixel in the destination image is never written, which is where the black points come from.
The correct solution is to iterate over the pixels of the destination image, then look up the corresponding pixel in the source image.
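For example, a rough sketch of that backward mapping, reusing the variable names from the question (colScaled, cos, sin, hw, hh, xn, yn, W, H, image); treat it as illustrative rather than drop-in code:
// Iterate over every pixel of the DESTINATION image and look up the matching
// source pixel with the inverse rotation, so no destination pixel is skipped.
for (int x = 0; x <= W; x++) {
    for (int y = 0; y <= H; y++) {
        // inverse of the forward mapping used in the question
        int i = Math.round((x - xn) * cos + (y - yn) * sin + hw);
        int j = Math.round(-(x - xn) * sin + (y - yn) * cos + hh);
        if (i >= 0 && i < colScaled.length && j >= 0 && j < colScaled[0].length) {
            image.setRGB(x, y, colScaled[i][j]);
        }
    }
}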
Edit: After posting, I just saw Spektre's comment.
I agree, it seems to be a duplicate question. The phrase "pixel array" made me think it was not.

Image edge detection algorithm: creating a 2D mesh

Let's first start off with what I am trying to do. I would like to be able to take a PNG file with a transparent background and find anywhere from 90 to 360 points along the edge of the subject of the image. Here is a rough example of what I mean. Given this image of Mario and Yoshi:
I want to make a circle that is centered at the center of the image with a diameter slightly larger than the largest side of the image to serve as a reference. Then, I want to go around the circle at set intervals, and trace a line towards the center until it hits a non-transparent pixel. Here is what that would look like:
I have attempted to implement this a few different times, all of which failed, and I was hoping to get some guidance or insight as to what I am doing wrong. Here is an image of the math I am using behind the code (sorry if the quality is not great, I don't have a scanner):
Line 1 is either the top, bottom, left, or right edge of the image, and Line 2 goes through the center of the circle at the given angle. The point where Lines 1 and 2 intersect should be on the edge of the image, and is where we should start looking for the edge of the image's subject.
Here is the code that I came up with from this idea. I did it in Java because BufferedImage is really easy to use, but I am going to translate this over to C# (XNA) for the final product.
public class Mesh {

    private int angleA, angleB, angleC, angleD;
    private BufferedImage image;
    private Point center;
    public ArrayList<Point> points = new ArrayList<>();

    public Mesh(BufferedImage image) {
        center = new Point(image.getWidth() / 2, image.getHeight() / 2);
        angleA = (int) (Math.atan(center.y / center.x) * (180 / Math.PI));
        angleB = 180 - angleA;
        angleC = 180 + angleA;
        angleD = 360 - angleA;
        this.image = image;
        for (int angle = 0; angle <= 360; angle += 4) {
            Point point = getNext(angle);
            if (point != null) points.add(point);
        }
    }

    private Point getNext(int angle) {
        double radians = angle * Math.PI / 180;
        double xStep = Math.cos(radians);
        double yStep = Math.sin(radians);
        int addX = angle >= 90 && angle <= 270 ? 1 : -1;
        int addY = angle >= 0 && angle <= 180 ? 1 : -1;
        double x, y;
        if (xStep != 0) {
            double slope = yStep / xStep;
            double intercept = center.y - (slope * center.x);
            if (angle >= angleA && angle <= angleB) {
                y = 0;
                x = -intercept / slope;
            } else if (angle > angleB && angle < angleC) {
                x = 0;
                y = intercept;
            } else if (angle >= angleC && angle <= angleD) {
                y = image.getHeight() - 1;
                x = (y - intercept) / slope;
            } else {
                x = image.getWidth() - 1;
                y = slope * x + intercept;
            }
        } else {
            x = center.x;
            y = angle <= angleB ? 0 : image.getHeight();
        }
        if (x < 0) x = 0;
        if (x > image.getWidth() - 1) x = image.getWidth() - 1;
        if (y < 0) y = 0;
        if (y > image.getHeight() - 1) y = image.getHeight() - 1;
        double distance = Math.sqrt(Math.pow(x - center.x, 2) + Math.pow(y - center.y, 2));
        double stepSize = Math.sqrt(Math.pow(xStep, 2) + Math.pow(yStep, 2));
        int totalSteps = (int) Math.floor(distance / stepSize);
        for (int step = 0; step < totalSteps; step++) {
            int xVal = (int) x;
            int yVal = (int) y;
            if (xVal < 0) xVal = 0;
            if (xVal > image.getWidth() - 1) xVal = image.getWidth() - 1;
            if (yVal < 0) yVal = 0;
            if (yVal > image.getHeight() - 1) yVal = image.getHeight() - 1;
            int pixel = image.getRGB(xVal, yVal);
            if ((pixel >> 24) == 0x00) {
                x += (Math.abs(xStep) * addX);
                y += (Math.abs(yStep) * addY);
            } else {
                return new Point(xVal, yVal);
            }
        }
        return null;
    }
}
The algorithm should return all positive points, ordered counterclockwise (and non-overlapping), but I have failed to get the desired output (this being my most recent attempt). So, to restate the question: is there a formalized way of doing this, or can someone find the mistake in my logic? For visual reference, the traced Mario and Yoshi image is roughly what the final output should look like, but with many more points (which would give more detail to the mesh).
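One way to simplify this idea (not from the original post) is to skip the line-intersection math and simply step from each point on the reference circle straight toward the center, returning the first non-transparent pixel. A rough sketch, assuming java.awt.Point and an image with an alpha channel; the helper name is made up:
// Hypothetical helper: walk from a point on the reference circle toward the
// center and return the first non-transparent pixel, or null if the whole
// ray is transparent.
private static Point traceToCenter(BufferedImage image, Point center, double radius, double angleDeg) {
    double rad = Math.toRadians(angleDeg);
    double x = center.x + radius * Math.cos(rad);   // start on the circle
    double y = center.y + radius * Math.sin(rad);
    int steps = (int) Math.ceil(radius);            // roughly one step per pixel
    double dx = (center.x - x) / steps;
    double dy = (center.y - y) / steps;
    for (int s = 0; s <= steps; s++) {
        int xi = (int) Math.round(x);
        int yi = (int) Math.round(y);
        if (xi >= 0 && xi < image.getWidth() && yi >= 0 && yi < image.getHeight()) {
            int alpha = (image.getRGB(xi, yi) >>> 24) & 0xFF;
            if (alpha != 0) {
                return new Point(xi, yi);
            }
        }
        x += dx;
        y += dy;
    }
    return null;
}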

Rotating all rectangle corners

I'm working on a game where you are a spaceship. This spaceship has to be able to rotate. The rectangle has two arrays, x[] and y[], containing all the corner positions of the rectangle. But when I apply the rotation formula, I get a rather weird rotation; it looks like it's rotating around the bottom left of the screen.
To make these corner arrays I take in an x position, y position, width and height.
Making the corner arrays:
public Vertex2f(float x, float y, float w, float h) {
    this.x[0] = x;
    this.y[0] = y;
    this.x[1] = x + w;
    this.y[1] = y;
    this.x[2] = x + w;
    this.y[2] = y + h;
    this.x[3] = x;
    this.y[3] = y + h;
}
My rotation function
public void rotate(float angle) {
    this.rotation = angle;
    double cos = Math.cos(rotation);
    double sin = Math.sin(rotation);
    for (int i = 0; i < x.length; i++) {
        x[i] = (float) (cos * x[i] - sin * y[i]);
        y[i] = (float) (sin * x[i] + cos * y[i]);
    }
}
If it helps I am using LWJGL/OpenGL in java for all the graphics and Slick2d to load and init the sprites I am using.
Try this one:
public void rotate(float angle) {
    this.rotation = angle;
    double cos = Math.cos(rotation);
    double sin = Math.sin(rotation);
    double xOffset = (x[0] + x[2]) / 2;
    double yOffset = (y[0] + y[2]) / 2;
    for (int i = 0; i < 3; i++) {
        x[i] = (float) (cos * (x[i] - xOffset) - sin * (y[i] - yOffset) + xOffset);
        y[i] = (float) (sin * (x[i] - xOffset) + cos * (y[i] - yOffset) + yOffset);
    }
}
You have to rotate around the center of your rectangle; otherwise the rotation center is at x=0, y=0.
edited:
public void rotate(float angle) {
    this.rotation = angle;
    double cos = Math.cos(rotation);
    double sin = Math.sin(rotation);
    double xOffset = (x[0] + x[2]) / 2;
    double yOffset = (y[0] + y[2]) / 2;
    for (int i = 0; i < x.length; i++) {   // rotate all four corners
        float newX = (float) (cos * (x[i] - xOffset) - sin * (y[i] - yOffset) + xOffset);
        float newY = (float) (sin * (x[i] - xOffset) + cos * (y[i] - yOffset) + yOffset);
        x[i] = newX;
        y[i] = newY;
    }
}
see other thread
The problem with the formulas
x[i] = (float)(cos * x[i] - sin * y[i]);
y[i] = (float)(sin * x[i] + cos * y[i]);
apart from the missing rotation center is that you change x[i] in the first formula but expect to use the original value in the second formula. Thus you need to use local variables lx, ly as in
float lx = x[i] - xcenter;
float ly = y[i] - ycenter;
x[i] = xcenter + (float)(cos * lx - sin * ly);
y[i] = ycenter + (float)(sin * lx + cos * ly);
If the object is already rotated by the angle rotation, then this code adds angle on top of the total rotation angle. If instead the given argument angle is meant to be the new total rotation angle, then the sin and cos values need to be computed from the angle difference. That is, start the procedure with, for instance,
public void rotate(float angle) {
    double cos = Math.cos(angle - rotation);
    double sin = Math.sin(angle - rotation);
    this.rotation = angle;
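Putting the pieces together, a complete method along these lines might look like the following sketch, assuming x and y are float[] arrays holding the four corners and rotation stores the current total angle in radians:
public void rotate(float angle) {
    double cos = Math.cos(angle - rotation);   // rotate only by the difference
    double sin = Math.sin(angle - rotation);
    this.rotation = angle;                     // store the new absolute angle
    float xCenter = (x[0] + x[2]) / 2;
    float yCenter = (y[0] + y[2]) / 2;
    for (int i = 0; i < x.length; i++) {
        float lx = x[i] - xCenter;             // keep the original values so the
        float ly = y[i] - yCenter;             // second formula is not corrupted
        x[i] = xCenter + (float) (cos * lx - sin * ly);
        y[i] = yCenter + (float) (sin * lx + cos * ly);
    }
}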

Get average color on bufferedimage and bufferedimage portion as fast as possible

I am trying to find an image in an image. I do this for desktop automation. At the moment I'm trying to be fast, not precise. As such, I have decided to match similar images solely based on the same average color.
If I pick several icons on my desktop, for example:
And I will search for the last one (I'm still wondering what this file is):
You can clearly see what is most likely to be the match:
In different situations this may not work. However, when the image size is given, it should be pretty reliable and lightning fast.
I can get a screenshot as BufferedImage object:
MSWindow window = MSWindow.windowFromName("Firefox", false);
BufferedImage img = window.screenshot();
//Or, if I can estimate smaller region for searching:
BufferedImage img2 = window.screenshotCrop(20,20,50,50);
Of course, the image to search for will be loaded from a template saved in a file:
BufferedImage img = ImageIO.read(...whatever goes in there, I'm still confused...);
I have explained everything I know so that we can focus on the only problem:
Q: How can I get the average color of a BufferedImage? And how can I get the average color of a sub-rectangle of that image?
Speed wins here. In this exceptional case, I consider it more valuable than code readability.
I think that no matter what you do, you are going to have an O(wh) operation, where w is your width and h is your height.
Therefore, I'm going to post this (naive) solution to fulfil the first part of your question as I do not believe there is a faster solution.
/*
 * Where bi is your image, (x0, y0) is your upper left coordinate, and (w, h)
 * are your width and height respectively.
 */
public static Color averageColor(BufferedImage bi, int x0, int y0, int w, int h) {
    int x1 = x0 + w;
    int y1 = y0 + h;
    long sumr = 0, sumg = 0, sumb = 0;
    for (int x = x0; x < x1; x++) {
        for (int y = y0; y < y1; y++) {
            Color pixel = new Color(bi.getRGB(x, y));
            sumr += pixel.getRed();
            sumg += pixel.getGreen();
            sumb += pixel.getBlue();
        }
    }
    int num = w * h;
    return new Color((int) (sumr / num), (int) (sumg / num), (int) (sumb / num));
}
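As a usage sketch for the matching idea in the question (screenshot, template, rx, ry and the tolerance are placeholder names, not from the post):
// Compare the template's average color against an equally sized region of the screenshot.
Color templateAvg = averageColor(template, 0, 0, template.getWidth(), template.getHeight());
Color regionAvg = averageColor(screenshot, rx, ry, template.getWidth(), template.getHeight());
int distance = Math.abs(templateAvg.getRed() - regionAvg.getRed())
        + Math.abs(templateAvg.getGreen() - regionAvg.getGreen())
        + Math.abs(templateAvg.getBlue() - regionAvg.getBlue());
boolean roughMatch = distance < 30;   // arbitrary tolerance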
There is a constant-time method for finding the mean colour of a rectangular section of an image, but it requires a linear preprocessing pass (a summed-area table, also known as an integral image). That should be fine in your case. The method can also be used to find the mean value of a rectangular prism in a 3D array, or any higher-dimensional analog of the problem. I will use a grayscale example, but this can easily be extended to 3 or more channels simply by repeating the process.
Let's say we have a 2-dimensional array of numbers we will call "img".
The first step is to generate a new array of the same dimensions where each element contains the sum of all values in the original image that lie within the rectangle that bounds that element and the top left element of the image.
You can use the following method to construct such an image in linear time:
int width = 1920;
int height = 1080;

// source data
int[] img = GrayScaleScreenCapture();

int[] helperImg = new int[width * height];

for (int y = 0; y < height; ++y)
{
    for (int x = 0; x < width; ++x)
    {
        int total = img[y * width + x];
        if (x > 0)
        {
            // Add value from the pixel to the left in helperImg
            total += helperImg[y * width + (x - 1)];
        }
        if (y > 0)
        {
            // Add value from the pixel above in helperImg
            total += helperImg[(y - 1) * width + x];
        }
        if (x > 0 && y > 0)
        {
            // Subtract value from the pixel above and to the left in helperImg
            total -= helperImg[(y - 1) * width + (x - 1)];
        }
        helperImg[y * width + x] = total;
    }
}
Now we can use helperImg to find the total of all values within a given rectangle of img in constant time:
// Some rectangle with corners (x0, y0), (x1, y0), (x0, y1), (x1, y1)
int x0 = 50;
int x1 = 150;
int y0 = 25;
int y1 = 200;

int totalOfRect = helperImg[y1 * width + x1];
if (x0 > 0)
{
    totalOfRect -= helperImg[y1 * width + (x0 - 1)];
}
if (y0 > 0)
{
    totalOfRect -= helperImg[(y0 - 1) * width + x1];
}
if (x0 > 0 && y0 > 0)
{
    totalOfRect += helperImg[(y0 - 1) * width + (x0 - 1)];
}
Finally, we simply divide totalOfRect by the area of the rectangle to get the mean value:
int rWidth = x1 - x0 + 1;
int rHeight = y1 - y0 + 1;
int meanOfRect = totalOfRect / (rWidth * rHeight);
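For the colour case mentioned at the start of this answer, one option (a sketch, not part of the original answer) is to build one summed-area table per channel directly from a BufferedImage:
// One summed-area table per RGB channel, built from a BufferedImage
// (long avoids any overflow worries on large images).
static long[][] buildChannelTables(BufferedImage bi) {
    int w = bi.getWidth(), h = bi.getHeight();
    long[] r = new long[w * h], g = new long[w * h], b = new long[w * h];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int rgb = bi.getRGB(x, y);
            long tr = (rgb >> 16) & 0xFF, tg = (rgb >> 8) & 0xFF, tb = rgb & 0xFF;
            if (x > 0) { tr += r[y * w + x - 1]; tg += g[y * w + x - 1]; tb += b[y * w + x - 1]; }
            if (y > 0) { tr += r[(y - 1) * w + x]; tg += g[(y - 1) * w + x]; tb += b[(y - 1) * w + x]; }
            if (x > 0 && y > 0) { tr -= r[(y - 1) * w + x - 1]; tg -= g[(y - 1) * w + x - 1]; tb -= b[(y - 1) * w + x - 1]; }
            r[y * w + x] = tr;
            g[y * w + x] = tg;
            b[y * w + x] = tb;
        }
    }
    return new long[][] { r, g, b };
}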
Here's a version based on k_g's answer for a full BufferedImage with adjustable sample precision (step).
public static Color getAverageColor(BufferedImage bi) {
    int step = 5;
    int sampled = 0;
    long sumr = 0, sumg = 0, sumb = 0;
    for (int x = 0; x < bi.getWidth(); x++) {
        for (int y = 0; y < bi.getHeight(); y++) {
            if (x % step == 0 && y % step == 0) {
                Color pixel = new Color(bi.getRGB(x, y));
                sumr += pixel.getRed();
                sumg += pixel.getGreen();
                sumb += pixel.getBlue();
                sampled++;
            }
        }
    }
    int dim = bi.getWidth() * bi.getHeight();
    // Log.info("step=" + step + " sampled " + sampled + " out of " + dim + " pixels (" + String.format("%.1f", (float) (100 * sampled / dim)) + " %)");
    return new Color(Math.round(sumr / sampled), Math.round(sumg / sampled), Math.round(sumb / sampled));
}

Rotating a BufferedImage and Saving it into a pixel array

I am trying to properly rotate a sword in my 2D game. I have a sword image file, and I wish to rotate the image at the player's location. I tried using Graphics2D and AffineTransform, but the problem is that the player moves on a different coordinate plane, the Screen class, while the Graphics uses the literal location of the pixels on the JFrame. So I realized that I need to render the sword by rotating the image itself and then saving it into a pixel array for my Screen class to render. However, I don't know how to do this. Here is the code for my screen rendering method:
public void render(double d, double yOffset2, BufferedImage image, int colour,
        int mirrorDir, double scale, SpriteSheet sheet) {
    d -= xOffset;
    yOffset2 -= yOffset;
    boolean mirrorX = (mirrorDir & BIT_MIRROR_X) > 0;
    boolean mirrorY = (mirrorDir & BIT_MIRROR_Y) > 0;
    double scaleMap = scale - 1;
    for (int y = 0; y < image.getHeight(); y++) {
        int ySheet = y;
        if (mirrorY)
            ySheet = image.getHeight() - 1 - y;
        int yPixel = (int) (y + yOffset2 + (y * scaleMap) - ((scaleMap * 8) / 2));
        for (int x = 0; x < image.getWidth(); x++) {
            int xPixel = (int) (x + d + (x * scaleMap) - ((scaleMap * 8) / 2));
            int xSheet = x;
            if (mirrorX)
                xSheet = image.getWidth() - 1 - x;
            int col = (colour >> (sheet.pixels[xSheet + ySheet * sheet.width])) & 255;
            if (col < 255) {
                for (int yScale = 0; yScale < scale; yScale++) {
                    if (yPixel + yScale < 0 || yPixel + yScale >= height)
                        continue;
                    for (int xScale = 0; xScale < scale; xScale++) {
                        if (x + d < 0 || x + d >= width)
                            continue;
                        pixels[(xPixel + xScale) + (yPixel + yScale) * width] = col;
                    }
                }
            }
        }
    }
}
Here is one of my poor attempts to call the render method from the Sword class:
public void render(Screen screen) {
    AffineTransform at = new AffineTransform();
    at.rotate(1, image.getWidth() / 2, image.getHeight() / 2);
    AffineTransformOp op = new AffineTransformOp(at,
            AffineTransformOp.TYPE_BILINEAR);
    image = op.filter(image, null);
    screen.render(this.x, this.y, image, SwordColor, 1, 1.5, sheet);
    hitBox.setLocation((int) this.x, (int) this.y);
    for (Entity entity : level.getEntities()) {
        if (entity instanceof Mob) {
            if (hitBox.intersects(((Mob) entity).hitBox)) {
                // ((Mob) entity).health--;
            }
        }
    }
}
Thank you for any help you can provide, and please feel free to tell me if there's a better way to do this.
You can rotate() the image around an anchor point, also seen here in a Graphics2D context. The method concatenates translate(), rotate() and translate() operations, also seen here as explicit transformations.
Addendum: It rotates the image, but how do I save the pixels of the image as an array?
Once you filter() the image, use one of the ImageIO.write() methods to save the resulting RenderedImage, for example.
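For the pixel-array part, a sketch along those lines might rotate around the image center with AffineTransformOp and then pull the result into an int[] (theta and the variable names here are illustrative):
// Rotate around the image center, then copy the result into a pixel array.
AffineTransform at = AffineTransform.getRotateInstance(theta, image.getWidth() / 2.0, image.getHeight() / 2.0);
AffineTransformOp op = new AffineTransformOp(at, AffineTransformOp.TYPE_BILINEAR);
BufferedImage rotated = op.filter(image, null);
int[] pixels = rotated.getRGB(0, 0, rotated.getWidth(), rotated.getHeight(), null, 0, rotated.getWidth());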
