LWJGL - Get part of image byte buffer - java

Hello, I am fairly new to OpenGL and LWJGL.
I have loaded an image into a ByteBuffer using the STBImage binding in LWJGL. I can draw the image to the screen; this works perfectly fine.
Now I want to "slice" an image into multiple smaller images. I need this for my tileset system.
So far I have only managed to slice the image into the individual tiles, but they are not drawn correctly. I think I know what the problem is, but I can't quite get it to work the way it should.
I think this is the problem:
int s = x * 4 + y * 4;
Here is the entire slicing function.
private void sliceTileset(Image[] images, float tileWidth, float tileHeight) {
    // The pixel buffer of the entire tileset image (loaded via STBImage.stbi_load)
    ByteBuffer tilesetPixelBuffer = tilesetImage.getPixelBuffer();
    int xOffset = 0;
    int yOffset = 0;
    // Loop over all available tile slots ((tileSetWidth / tileWidth) +
    // (tileSetHeight / tileHeight))
    for (int i = 0; i < images.length; i++) {
        ByteBuffer buffer = ByteBuffer.allocateDirect((int) (tileWidth * tileHeight * 4));
        for (int y = 0; y < tileHeight; y++) {
            for (int x = xOffset; x < tileWidth + xOffset; x++) {
                int s = x * 4 + y * 4;
                // get the pixel from the tileset image buffer
                int r = tilesetPixelBuffer.get(s);
                int g = tilesetPixelBuffer.get(s + 1);
                int b = tilesetPixelBuffer.get(s + 2);
                int a = tilesetPixelBuffer.get(s + 3);
                // put it into the tile image buffer
                buffer.put((byte) r);
                buffer.put((byte) g);
                buffer.put((byte) b);
                buffer.put((byte) a);
            }
        }
        // Create a new image with the tile image buffer
        images[i] = new Image(buffer, tileWidth, tileHeight);
        buffer.flip();
        // Offset the x-position for the next tile
        xOffset += tileWidth;
        // Offset the y-position for the next tile
        yOffset += tileHeight; // Currently does nothing
    }
}
If you need more information, just ask for it!
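For reference, here is a minimal sketch of how the source index could be computed so that both offsets are actually used. It assumes the tileset is tightly packed RGBA and introduces a hypothetical tilesetWidth variable (the width of the whole tileset in pixels), which is not part of the original code:

    // Sketch only, not the original code: 4 bytes per pixel, row-major layout.
    // x already includes xOffset; y is tile-local, so yOffset is added here.
    int s = ((y + yOffset) * tilesetWidth + x) * 4;

    // After finishing a tile, advance the offsets and wrap to the next row:
    xOffset += tileWidth;
    if (xOffset >= tilesetWidth) {
        xOffset = 0;
        yOffset += tileHeight;
    }

Flipping the buffer before handing it to the Image constructor (rather than after) may also matter, depending on how Image reads it.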

Related

I try to rotate without a library, but it makes black points in the picture

I am trying to rotate an image without the standard method, by making a color array and manipulating it, but when I invoke the rotation I get black points (see the picture).
Here is my code; colScaled is the picture I am trying to convert to an array:
public void arrays() {
    colScaled = zoom2();
    int j = 0;
    int i = 0;
    angel = Integer.parseInt(this.mn.jTextField1.getText());
    float degree = (float) Math.toRadians(angel);
    float cos = (float) Math.cos(degree);
    float sin = (float) Math.sin(degree);
    int W = Math.round(colScaled[0].length * Math.abs(sin) + colScaled.length * Math.abs(cos));
    int H = Math.round(colScaled[0].length * Math.abs(cos) + colScaled.length * Math.abs(sin));
    int x;
    int y;
    int xn = (int) W / 2;
    int yn = (int) H / 2;
    int hw = (int) colScaled.length / 2;
    int hh = (int) colScaled[0].length / 2;
    BufferedImage image = new BufferedImage(W + 1, H + 1, im.getType());
    for (i = 0; i < colScaled.length; i++) {
        for (j = 0; j < colScaled[0].length; j++) {
            x = Math.round((i - hw) * cos - (j - hh) * sin + xn);
            y = Math.round((i - hw) * sin + (j - hh) * cos + yn);
            image.setRGB(x, y, colScaled[i][j]);
        }
    }
    ImageIcon ico = new ImageIcon(image);
    this.mn.jLabel1.setIcon(ico);
}
Notice this block in your code :-
for (i = 0; i < colScaled.length; i++) {
for (j = 0; j < colScaled[0].length; j++) {
x = Math.round((i - hw) * cos - (j - hh) * sin + xn);
y = Math.round((i - hw) * sin + (j - hh) * cos + yn);
image.setRGB(x, y, colScaled[i][j]);
}
}
Here i and j are pixel coordinates in the source image (colScaled), while x and y are the computed coordinates in the destination image (image).
The objective of this code is to fill every pixel of the destination image.
Because the loop iterates over source pixels and rounds the results, there is no guarantee that every pixel in the destination image will be written, even inside the rectangular zone.
The image above depicts the problem.
See? It is possible that the red pixel in the destination image will never be written.
The correct solution is to iterate over the pixels of the destination image and, for each one, find the corresponding pixel in the source image.
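A minimal sketch of that inverse mapping, reusing the cos, sin, xn, yn, hw, hh, W and H variables from the code above (a sketch under those assumptions, not the original code):

    // Iterate over destination pixels and rotate back to find the source pixel,
    // so every destination pixel receives exactly one value.
    for (int xd = 0; xd < W; xd++) {
        for (int yd = 0; yd < H; yd++) {
            int xs = Math.round((xd - xn) * cos + (yd - yn) * sin + hw);
            int ys = Math.round(-(xd - xn) * sin + (yd - yn) * cos + hh);
            if (xs >= 0 && xs < colScaled.length && ys >= 0 && ys < colScaled[0].length) {
                image.setRGB(xd, yd, colScaled[xs][ys]);
            }
        }
    }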
Edit: After posting, I just saw Spektre's comment.
I agree, it seems to be a duplicate question. The phrase "pixel array" made me think it was not.

Get average color on bufferedimage and bufferedimage portion as fast as possible

I am trying to find an image within an image. I do this for desktop automation. At this moment, I'm trying to be fast, not precise. As such, I have decided to match similar images based solely on their average color.
If I pick several icons on my desktop, for example:
And I will search for the last one (I'm still wondering what this file is):
You can clearly see what is most likely to be the match:
In other situations this may not work, but when the image size is fixed it should be pretty reliable and lightning fast.
I can get a screenshot as a BufferedImage object:
MSWindow window = MSWindow.windowFromName("Firefox", false);
BufferedImage img = window.screenshot();
//Or, if I can estimate smaller region for searching:
BufferedImage img2 = window.screenshotCrop(20,20,50,50);
Of course, the image to search for will be loaded from a template saved in a file:
BufferedImage img = ImageIO.read(...whatever goes in there, I'm still confused...);
I have explained everything I know so that we can focus on the only real problem:
Q: How can I get the average color of a BufferedImage? And how can I get the average color of a sub-rectangle of that image?
Speed wins here. In this exceptional case, I consider it more valuable than code readability.
I think that no matter what you do, you are going to have an O(wh) operation, where w is your width and h is your height.
Therefore, I'm going to post this (naive) solution to fulfil the first part of your question as I do not believe there is a faster solution.
/*
 * Where bi is your image, (x0, y0) is your upper-left coordinate, and (w, h)
 * are your width and height respectively.
 */
public static Color averageColor(BufferedImage bi, int x0, int y0, int w,
        int h) {
    int x1 = x0 + w;
    int y1 = y0 + h;
    long sumr = 0, sumg = 0, sumb = 0;
    for (int x = x0; x < x1; x++) {
        for (int y = y0; y < y1; y++) {
            Color pixel = new Color(bi.getRGB(x, y));
            sumr += pixel.getRed();
            sumg += pixel.getGreen();
            sumb += pixel.getBlue();
        }
    }
    int num = w * h;
    // Cast to int so the long sums select the Color(int, int, int) constructor.
    return new Color((int) (sumr / num), (int) (sumg / num), (int) (sumb / num));
}
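For the sub-rectangle part of the question, a call might look like this (reusing img and img2 from the question; the rectangle values are only illustrative):

    // Average colour of the whole cropped screenshot...
    Color avg = averageColor(img2, 0, 0, img2.getWidth(), img2.getHeight());
    // ...or of a 50x50 region of the full screenshot starting at (20, 20).
    Color avgRect = averageColor(img, 20, 20, 50, 50);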
There is a constant-time method for finding the mean colour of a rectangular section of an image, but it requires a linear preprocess. This should be fine in your case. The same method can also be used to find the mean value of a rectangular prism in a 3D array, or any higher-dimensional analog of the problem. I will be using a grayscale example, but this can easily be extended to 3 or more channels simply by repeating the process.
Let's say we have a two-dimensional array of numbers we will call "img".
The first step is to generate a new array of the same dimensions (a summed-area table), where each element contains the sum of all values in the original image that lie within the rectangle spanned by that element and the top-left element of the image.
You can use the following method to construct such an image in linear time:
int width = 1920;
int height = 1080;
// source data (placeholder for however you capture the grayscale screen)
int[] img = GrayScaleScreenCapture();
int[] helperImg = new int[width * height];
for (int y = 0; y < height; ++y)
{
    for (int x = 0; x < width; ++x)
    {
        int total = img[y * width + x];
        if (x > 0)
        {
            // Add value from the pixel to the left in helperImg
            total += helperImg[y * width + (x - 1)];
        }
        if (y > 0)
        {
            // Add value from the pixel above in helperImg
            total += helperImg[(y - 1) * width + x];
        }
        if (x > 0 && y > 0)
        {
            // Subtract value from the pixel above and to the left in helperImg
            total -= helperImg[(y - 1) * width + (x - 1)];
        }
        helperImg[y * width + x] = total;
    }
}
Now we can use helperImg to find the total of all values within a given rectangle of img in constant time:
// Some rectangle with corners (x0, y0), (x1, y0), (x0, y1), (x1, y1)
int x0 = 50;
int x1 = 150;
int y0 = 25;
int y1 = 200;
int totalOfRect = helperImg[y1 * width + x1];
if (x0 > 0)
{
    totalOfRect -= helperImg[y1 * width + (x0 - 1)];
}
if (y0 > 0)
{
    totalOfRect -= helperImg[(y0 - 1) * width + x1];
}
if (x0 > 0 && y0 > 0)
{
    totalOfRect += helperImg[(y0 - 1) * width + (x0 - 1)];
}
Finally, we simply divide totalOfRect by the area of the rectangle to get the mean value:
int rWidth = x1 - x0 + 1;
int rHeight = y1 - y0 + 1;
int meanOfRect = totalOfRect / (rWidth * rHeight);
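The answer notes that colour just means repeating the process per channel. A minimal sketch of that, assuming three summed-area tables redHelper, greenHelper and blueHelper (hypothetical names) built exactly like helperImg above, but from the red, green and blue channels of the image:

    // Hypothetical helper: constant-time rectangle total over one summed-area
    // table, i.e. exactly the totalOfRect computation shown above.
    static int rectTotal(int[] sat, int width, int x0, int y0, int x1, int y1) {
        int t = sat[y1 * width + x1];
        if (x0 > 0) t -= sat[y1 * width + (x0 - 1)];
        if (y0 > 0) t -= sat[(y0 - 1) * width + x1];
        if (x0 > 0 && y0 > 0) t += sat[(y0 - 1) * width + (x0 - 1)];
        return t;
    }

    // Mean colour of the rectangle, one table lookup per channel:
    int area = (x1 - x0 + 1) * (y1 - y0 + 1);
    Color mean = new Color(
            rectTotal(redHelper, width, x0, y0, x1, y1) / area,
            rectTotal(greenHelper, width, x0, y0, x1, y1) / area,
            rectTotal(blueHelper, width, x0, y0, x1, y1) / area);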
Here's a version based on k_g's answer for a full BufferedImage with adjustable sample precision (step).
public static Color getAverageColor(BufferedImage bi) {
    int step = 5;
    int sampled = 0;
    long sumr = 0, sumg = 0, sumb = 0;
    for (int x = 0; x < bi.getWidth(); x++) {
        for (int y = 0; y < bi.getHeight(); y++) {
            if (x % step == 0 && y % step == 0) {
                Color pixel = new Color(bi.getRGB(x, y));
                sumr += pixel.getRed();
                sumg += pixel.getGreen();
                sumb += pixel.getBlue();
                sampled++;
            }
        }
    }
    int dim = bi.getWidth() * bi.getHeight();
    // Log.info("step=" + step + " sampled " + sampled + " out of " + dim + " pixels (" + String.format("%.1f", (float) (100 * sampled / dim)) + " %)");
    return new Color(Math.round(sumr / sampled), Math.round(sumg / sampled), Math.round(sumb / sampled));
}
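A quick usage example for this version, loading the template image the question mentions (the file path is only a placeholder):

    // Sketch: load the template and compute its sampled average colour.
    BufferedImage template = ImageIO.read(new File("template.png")); // placeholder path
    Color templateAvg = getAverageColor(template);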

Rotating a BufferedImage and Saving it into a pixel array

I am trying to properly rotate a sword in my 2D game. I have a sword image file, and I wish to rotate the image at the player's location. I tried using Graphics2D and AffineTransform, but the problem is that the player moves on a different coordinate plane, the Screen class, and the Graphics uses the literal location of the pixels on the JFrame. So, I realized that I need to render the sword by rotating the image itself, and then saving it into a pixel array for my screen class to render. However, I don't know how to do this. Here is the code for my screen rendering method:
public void render(double d, double yOffset2, BufferedImage image, int colour,
        int mirrorDir, double scale, SpriteSheet sheet) {
    d -= xOffset;
    yOffset2 -= yOffset;
    boolean mirrorX = (mirrorDir & BIT_MIRROR_X) > 0;
    boolean mirrorY = (mirrorDir & BIT_MIRROR_Y) > 0;
    double scaleMap = scale - 1;
    for (int y = 0; y < image.getHeight(); y++) {
        int ySheet = y;
        if (mirrorY)
            ySheet = image.getHeight() - 1 - y;
        int yPixel = (int) (y + yOffset2 + (y * scaleMap) - ((scaleMap * 8) / 2));
        for (int x = 0; x < image.getWidth(); x++) {
            int xPixel = (int) (x + d + (x * scaleMap) - ((scaleMap * 8) / 2));
            int xSheet = x;
            if (mirrorX)
                xSheet = image.getWidth() - 1 - x;
            int col = (colour >> (sheet.pixels[xSheet + ySheet * sheet.width])) & 255;
            if (col < 255) {
                for (int yScale = 0; yScale < scale; yScale++) {
                    if (yPixel + yScale < 0 || yPixel + yScale >= height)
                        continue;
                    for (int xScale = 0; xScale < scale; xScale++) {
                        if (x + d < 0 || x + d >= width)
                            continue;
                        pixels[(xPixel + xScale) + (yPixel + yScale) * width] = col;
                    }
                }
            }
        }
    }
}
Here is one of my poor attempts to call the render method from the Sword Class:
public void render(Screen screen) {
    AffineTransform at = new AffineTransform();
    at.rotate(1, image.getWidth() / 2, image.getHeight() / 2);
    AffineTransformOp op = new AffineTransformOp(at,
            AffineTransformOp.TYPE_BILINEAR);
    image = op.filter(image, null);
    screen.render(this.x, this.y, image, SwordColor, 1, 1.5, sheet);
    hitBox.setLocation((int) this.x, (int) this.y);
    for (Entity entity : level.getEntities()) {
        if (entity instanceof Mob) {
            if (hitBox.intersects(((Mob) entity).hitBox)) {
                // ((Mob) entity).health--;
            }
        }
    }
}
Thank you for any help you can provide, and please feel free to tell me if there's a better way to do this.
You can rotate() the image around an anchor point, also seen here in a Graphics2D context. The method concatenates translate(), rotate() and translate() operations, also seen here as explicit transformations.
Addendum: It rotates the image, but how do I save the pixels of the image as an array?
Once you filter() the image, use one of the ImageIO.write() methods to save the resulting RenderedImage, for example.
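If the goal is a pixel array for the Screen class rather than a file, a minimal sketch of that step (reusing op and image from the question's render method; rotated and rotatedPixels are just illustrative names):

    // Rotate, then copy the result into an int[] of packed ARGB pixels.
    BufferedImage rotated = op.filter(image, null);
    int w = rotated.getWidth();
    int h = rotated.getHeight();
    int[] rotatedPixels = new int[w * h];
    rotated.getRGB(0, 0, w, h, rotatedPixels, 0, w);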

Image processing using java

I am writing Java code that divides an image into chunks, rotates each chunk by some angle, and combines the chunks into one final image. Then I use the same code to divide the image into chunks again and rotate them in the opposite direction. I expect to get the same image as the original, but instead I get an image with black lines separating the chunks. For example, an image is divided into 8 rows and 8 columns and then rotated. I have googled it and come up with the following code:
public static BufferedImage Didvide(BufferedImage image, int Bz, double angle) {
    int rows = Bz;
    int cols = Bz;
    int chunks = rows * cols;
    int chunkWidth = image.getWidth() / cols;
    int chunkHeight = image.getHeight() / rows;
    int count = 0;
    BufferedImage imgs[] = new BufferedImage[chunks];
    for (int x = 0; x < rows; x++) {
        for (int y = 0; y < cols; y++) {
            imgs[count] = new BufferedImage(chunkWidth, chunkHeight,
                    image.getType());
            // draws image chunk
            Graphics2D gr = imgs[count++].createGraphics();
            gr.drawImage(image, 0, 0, chunkWidth, chunkHeight,
                    chunkWidth * y, chunkHeight * x,
                    chunkWidth * y + chunkWidth, chunkHeight * x + chunkHeight, null);
            gr.dispose();
        }
    }
    BufferedImage[] Rimgs = new BufferedImage[imgs.length];
    for (int i = 0; i < imgs.length; i++) {
        Rimgs[i] = rotate(imgs[i], angle);
    }
    chunkWidth = Rimgs[0].getWidth();
    chunkHeight = Rimgs[0].getHeight();
    // Initializing the final image
    BufferedImage finalImg = new BufferedImage(chunkWidth * cols,
            chunkHeight * rows, BufferedImage.TYPE_3BYTE_BGR);
    int num = 0;
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            finalImg.createGraphics().drawImage(Rimgs[num], chunkWidth * j,
                    chunkHeight * i, null);
            num++;
        }
    }
    return finalImg;
}
public static BufferedImage rotate(BufferedImage image, double angle) {
    double sin = Math.abs(Math.sin(angle)), cos = Math.abs(Math.cos(angle));
    int w = image.getWidth(), h = image.getHeight();
    int neww = (int) Math.floor(w * cos + h * sin);
    int newh = (int) Math.floor(h * cos + w * sin);
    GraphicsConfiguration gc = getDefaultConfiguration();
    BufferedImage result = gc.createCompatibleImage(neww, newh,
            Transparency.OPAQUE);
    Graphics2D g = result.createGraphics();
    g.translate((neww - w) / 2, (newh - h) / 2);
    g.rotate(angle, w / 2, h / 2);
    g.drawRenderedImage(image, null);
    g.dispose();
    return result;
}
The problem I face: after dividing a 298x298-pixel grayscale baboon image into 8 columns and 8 rows, the resulting image has black lines separating the columns. However, when I divide the image into 12 or 4, it works fine. Can you please let me know what I should look into?
It seems I cannot post an image.
When I divide and rotate the 298x298 image into 8 rows and 8 columns, I get a 296x296-pixel result. How can I fix this so that the size before dividing and rotating is the same as after?
Thanks in advance for your help.
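For reference, the 296x296 result comes from integer division in the chunk sizes: 298 / 8 = 37, and 37 * 8 = 296, so two pixel rows and two pixel columns are simply never copied. A minimal sketch (not the original code, written against the variables inside Didvide) of letting the last row and column absorb the remainder, which keeps the reassembled size equal to the original:

    // Sketch: per-chunk sizes that absorb the remainder in the last row/column.
    int baseW = image.getWidth() / cols;   // 37 for 298 / 8
    int baseH = image.getHeight() / rows;
    for (int x = 0; x < rows; x++) {
        for (int y = 0; y < cols; y++) {
            int w = (y == cols - 1) ? image.getWidth() - baseW * (cols - 1) : baseW;
            int h = (x == rows - 1) ? image.getHeight() - baseH * (rows - 1) : baseH;
            // create and draw this chunk with width w and height h instead of
            // the fixed chunkWidth / chunkHeight used above
        }
    }

Whether this also removes the black lines depends on how the rotated chunk sizes line up when they are drawn back; the sketch only addresses the pixels lost to the division.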

Fastest way to load thumbnail pixel values into Java

I need to be able to load RGB pixel values at a certain resolution into Java. That resolution is small (~300x300).
Currently, I've been loading them like this:
File file = new File("...path...");
BufferedImage imsrc = ImageIO.read(file);
int width = imsrc.getWidth();
int height = imsrc.getHeight();
int[] data = new int[width * height];
imsrc.getRGB(0,0, width, height, data, 0, width);
and then downsizing it myself.
Sam asked for the down-sizing code, so here it is:
/**
 * DownSize an image.
 * This is NOT precise, and is noisy.
 * However, this is fast and better than NearestNeighbor.
 * @param pixels - _RGB pixel values for the original image
 * @param width - width of the original image
 * @param newWidth - width of the new image
 * @param newHeight - height of the new image
 * @return - _RGB pixel values of the resized image
 */
public static int[] downSize(int[] pixels, int width, int newWidth, int newHeight) {
    int height = pixels.length / width;
    if (newWidth == width && height == newHeight) return pixels;
    int[] resized = new int[newWidth * newHeight];
    float x_ratio = (float) width / newWidth;
    float y_ratio = (float) height / newHeight;
    float xhr = x_ratio / 2;
    float yhr = y_ratio / 2;
    int i, j, k, l, m;
    for (int x = 0; x < newWidth; x++)
        for (int y = 0; y < newHeight; y++) {
            i = (int) (x * x_ratio);
            j = (int) (y * y_ratio);
            k = (int) (x * x_ratio + xhr);
            l = (int) (y * y_ratio + yhr);
            for (int p = 0; p < 3; p++) {
                m = 0xFF << (p * 8);
                resized[x + y * newWidth] |= (
                        (pixels[i + j * width] & m) +
                        (pixels[k + j * width] & m) +
                        (pixels[i + l * width] & m) +
                        (pixels[k + l * width] & m) >> 2) & m;
            }
        }
    return resized;
}
Recently, I realized that I can down-size with ImageMagick's 'convert' and then load the down-sized version that way. That saves an additional 33%.
I was wondering, if there's an even better way.
EDIT: I realized that some people would wonder if my code is good in general, and the answer is NO. The code I used works well for me, because I down-size already small images (say 640x480, otherwise .getRGB() takes forever) and I don't care if a couple of color points spill over (carry-over from addition), and I know some people really care about that.
Here's a very good article on generating thumbnails in Java in an optimal way:
http://today.java.net/pub/a/today/2007/04/03/perils-of-image-getscaledinstance.html
You may have better results with specifying different scaling/rendering parameters.
Graphics2D g2 = (Graphics2D)g;
int newW = (int)(originalImage.getWidth() * xScaleFactor);
int newH = (int)(originalImage.getHeight() * yScaleFactor);
g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
g2.drawImage(originalImage, 0, 0, newW, newH, null);
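Putting the pieces together, a minimal self-contained sketch of reading a file, scaling it with the rendering hint above, and pulling the pixel values out as in the question (the file name and target size are placeholders):

    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class ThumbnailPixels {
        public static void main(String[] args) throws Exception {
            BufferedImage originalImage = ImageIO.read(new File("input.png")); // placeholder path
            int newW = 300, newH = 300; // target thumbnail size

            BufferedImage thumb = new BufferedImage(newW, newH, BufferedImage.TYPE_INT_RGB);
            Graphics2D g2 = thumb.createGraphics();
            g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
            g2.drawImage(originalImage, 0, 0, newW, newH, null);
            g2.dispose();

            // Packed _RGB pixel values of the thumbnail, as in the question.
            int[] data = new int[newW * newH];
            thumb.getRGB(0, 0, newW, newH, data, 0, newW);
        }
    }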
