I use OpenGL with LWJGL in Java.
I just want to render my texture with its alpha layer, but it actually looks like this:
http://puu.sh/8FRzn.png
My OpenGL configuration:
glEnable(GL_TEXTURE_2D);
glBlendFunc(GL_ONE, GL_ONE); // note: overridden by the next call
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
GL11.glViewport(0, 0, screenWidth, screenHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, screenWidth, screenHeight, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
And here is how I load my sprite sheet:
BLA BLA BLA ..
public static HashMap<String, Integer> loadTexture(String path) {
    BufferedImage image = null;
    HashMap<String, Integer> hashMap = new HashMap<String, Integer>();
    try {
        image = ImageIO.read(new File(path));
    } catch (IOException e) {
        e.printStackTrace();
    }
    int w = image.getWidth();
    int h = image.getHeight();
    int[] pixels = image.getRGB(0, 0, w, h, null, 0, w);
    ByteBuffer buffer = BufferUtils.createByteBuffer(w * h * 4);
    for (int x = 0; x < w; x++) {
        for (int y = 0; y < h; y++) {
            Color color = new Color(pixels[x + y * w]);
            buffer.put((byte) color.getRed());
            buffer.put((byte) color.getGreen());
            buffer.put((byte) color.getBlue());
            buffer.put((byte) color.getAlpha());
        }
    }
    buffer.flip();
    int id = glGenTextures();
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    hashMap.put("width", w);
    hashMap.put("height", h);
    hashMap.put("id", id);
    return hashMap;
}
And I render it with the usual method: GL_QUADS.
Yes, my texture does have an alpha layer: http://puu.sh/8FS15.png
I already searched the web for an answer, but my OpenGL "configuration" matches the solutions given every time.
I believe this is an issue with the BufferedImage not having the alpha channel enabled. Try creating your BufferedImage first, like this.
By default, when you load an image as a BufferedImage, it may not carry the alpha layer of said image. You would normally define this in the BufferedImage constructor, but when loading through ImageIO the BufferedImage is created for you, so you cannot set that option. By creating an empty image with the width and height of the first image, you can set this parameter on the new image. You can then modify the pixels of that image through the Graphics2D class, which lets you draw the original image (the one loaded through ImageIO) onto the new one that has alpha support. It may sound a bit jumbled, but the code example below should make the explanation clearer.
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
This will tell it to load the alpha channel with the BufferedImage.
You will need to load the image in a separate BufferedImage first to get the width and height, then redraw it with a graphics class.
Here is a full example:
try {
    BufferedImage in = ImageIO.read(new FileInputStream(path));
    BufferedImage image = new BufferedImage(in.getWidth(), in.getHeight(), BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = image.createGraphics();
    g.drawImage(in, 0, 0, null);
    g.dispose();
    this.image = image;
} catch (IOException e) {
    e.printStackTrace();
}
The final this.image refers to your main image in the start of the class.
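One related pitfall worth noting: java.awt.Color's single-int constructor creates an opaque color, so `new Color(pixels[x + y * w])` in the question's loader always reports alpha 255, even when the pixel array does contain alpha bits. Passing `true` as the second constructor argument keeps them. A small sketch of the difference:

```java
import java.awt.Color;

public class AlphaDemo {
    public static void main(String[] args) {
        int argb = 0x80FF0000; // 50% transparent red, packed as ARGB

        // Single-int constructor: creates an opaque color, alpha is discarded
        System.out.println(new Color(argb).getAlpha());       // 255

        // With hasalpha = true: the top 8 bits are kept as the alpha component
        System.out.println(new Color(argb, true).getAlpha()); // 128
    }
}
```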
Related
I'm trying to scale an image to 50x50 px, but I get a black background. I need the black to be white after scaling.
This is my code:
BufferedImage imgs = urlToBufferImage("src//imgTest.jpg");
BufferedImage resizedImage = new BufferedImage(50, 50, imgs.getType());
Graphics2D g = resizedImage.createGraphics();
// g.setBackground(Color.WHITE);
// g.drawImage(imgs, 0, 0, 50, 50, Color.WHITE, null);
g.drawImage(imgs.getScaledInstance(50, -1, Image.SCALE_DEFAULT), 0, 0, this);
g.dispose();
This is pretty simple.
My approach would be not to create a new BufferedImage, but to do:
BufferedImage imgs = urlToBufferImage("src//imgTest.jpg");
Graphics g = imgs.createGraphics();
g.drawImage(imgs, x, y, 50, 50, null);
or instead of drawing the image inside of the bounds, you could do
Graphics2D g2d = imgs.createGraphics();
g2d.scale(0.5, 0.5);
g2d.drawImage(imgs, x, y, null);
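Another option, as a hedge against the black background from the question: composite the scaled image onto a new RGB image that has been filled with white first, so any transparent or uncovered area ends up white rather than black. A minimal sketch (the `scaleOnWhite` helper name and the 50x50 target size are assumptions, not from the original code):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class ScaleDemo {
    /** Scales src into a new RGB image, compositing transparency over white. */
    public static BufferedImage scaleOnWhite(BufferedImage src, int w, int h) {
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.setColor(Color.WHITE);     // paint the background first...
        g.fillRect(0, 0, w, h);      // ...so uncovered areas are white, not black
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, w, h, null); // scale while drawing
        g.dispose();
        return out;
    }

    public static void main(String[] args) {
        // A fully transparent source stands in for an image with alpha
        BufferedImage src = new BufferedImage(4, 4, BufferedImage.TYPE_INT_ARGB);
        BufferedImage out = scaleOnWhite(src, 50, 50);
        System.out.println(Integer.toHexString(out.getRGB(0, 0))); // ffffffff
    }
}
```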
I am making a game with LWJGL and by using openGL, I believe my best option is to use Textures and render them with quads. However, I can only seem to find information on loading a texture from an image where the entire image is only ONE texture. What I would like to do is read an entire spritesheet in and be able to separate it into different textures. Is there a somewhat simple way to do this?
You could load the image, e.g. from a .png file, into a BufferedImage with
public static BufferedImage loadImage(String location)
{
    try {
        BufferedImage image = ImageIO.read(new File(location));
        return image;
    } catch (IOException e) {
        System.out.println("Could not load texture: " + location);
    }
    return null;
}
Now you are able to call getSubimage(int x, int y, int w, int h) on the resulting BufferedImage, giving you the separated part. You then just need to create a texture from that BufferedImage. This code should do the work:
public static int loadTexture(BufferedImage image) {
    if (image == null) {
        return 0;
    }
    int[] pixels = new int[image.getWidth() * image.getHeight()];
    image.getRGB(0, 0, image.getWidth(), image.getHeight(), pixels, 0, image.getWidth());
    // BYTES_PER_PIXEL is assumed to be a constant defined elsewhere: 4 for RGBA, 3 for RGB
    ByteBuffer buffer = BufferUtils.createByteBuffer(image.getWidth() * image.getHeight() * BYTES_PER_PIXEL);
    for (int y = 0; y < image.getHeight(); y++) {
        for (int x = 0; x < image.getWidth(); x++) {
            int pixel = pixels[y * image.getWidth() + x];
            buffer.put((byte) ((pixel >> 16) & 0xFF)); // red component
            buffer.put((byte) ((pixel >> 8) & 0xFF));  // green component
            buffer.put((byte) (pixel & 0xFF));         // blue component
            buffer.put((byte) ((pixel >> 24) & 0xFF)); // alpha component, only for RGBA
        }
    }
    buffer.flip(); // FOR THE LOVE OF GOD DO NOT FORGET THIS
    // You now have a ByteBuffer filled with the color data of each pixel.
    // Now just create a texture ID and bind it. Then you can load it using
    // whatever OpenGL method you want, for example:
    int textureID = glGenTextures();
    glBindTexture(GL_TEXTURE_2D, textureID);
    // set up wrap mode
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
    // set up texture scaling filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // send texel data to OpenGL (internal format GL_RGBA8)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, image.getWidth(), image.getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    return textureID;
}
You can now bind the returned textureID with glBindTexture(GL_TEXTURE_2D, textureID); whenever you need the texture.
This way you only have to split the BufferedImage into the desired parts.
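The splitting step above can be sketched like this; the `SpriteSheetSplitter` helper and the tile sizes are hypothetical, but getSubimage is the standard API call:

```java
import java.awt.image.BufferedImage;

public class SpriteSheetSplitter {
    /** Cuts a sheet into equally sized tiles, row by row. */
    public static BufferedImage[] split(BufferedImage sheet, int tileW, int tileH) {
        int cols = sheet.getWidth() / tileW;
        int rows = sheet.getHeight() / tileH;
        BufferedImage[] tiles = new BufferedImage[cols * rows];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // getSubimage shares the parent's pixel data;
                // copy the tile if you intend to mutate it later
                tiles[r * cols + c] = sheet.getSubimage(c * tileW, r * tileH, tileW, tileH);
            }
        }
        return tiles;
    }

    public static void main(String[] args) {
        // A blank 64x32 sheet stands in for a loaded spritesheet
        BufferedImage sheet = new BufferedImage(64, 32, BufferedImage.TYPE_INT_ARGB);
        System.out.println(split(sheet, 32, 32).length); // 2
    }
}
```

Each resulting tile can then be passed to the loadTexture method above to get its own texture ID.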
I recommend reading this: LWJGL Textures and Strings
I followed a tutorial for reading a picture and creating a texture out of it, however, it shows up flipped upside down when rendered. The image is power of two.
Main class
public class Main {
    public static void main(String args[]) throws IOException {
        Main quadExample = new Main();
        quadExample.start();
    }

    public void start() throws IOException {
        try {
            Display.setDisplayMode(new DisplayMode(1280, 720));
            Display.create();
        } catch (LWJGLException e) {
            e.printStackTrace();
            System.exit(0);
        }
        // init OpenGL
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        GL11.glOrtho(0, 1280, 0, 720, -1, 1);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glClearColor(0, 1, 0, 0);
        GL11.glEnable(GL11.GL_TEXTURE_2D);
        BufferedImage image = TextureLoader.loadImage("C:\\test.png");
        final int textureID = TextureLoader.loadTexture(image);
        while (!Display.isCloseRequested()) {
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
            GL11.glBegin(GL11.GL_QUADS);
            GL11.glTexCoord2f(0, 0);
            GL11.glVertex2f(0, 0);
            GL11.glTexCoord2f(1, 0);
            GL11.glVertex2f(256, 0);
            GL11.glTexCoord2f(1, 1);
            GL11.glVertex2f(256, 256);
            GL11.glTexCoord2f(0, 1);
            GL11.glVertex2f(0, 256);
            GL11.glEnd();
            Display.update();
        }
        Display.destroy();
    }
}
Texture Loader
public class TextureLoader {
    private static final int BYTES_PER_PIXEL = 4;

    public static int loadTexture(BufferedImage image) {
        int[] pixels = new int[image.getWidth() * image.getHeight()];
        image.getRGB(0, 0, image.getWidth(), image.getHeight(), pixels, 0, image.getWidth());
        ByteBuffer buffer = BufferUtils.createByteBuffer(image.getWidth() * image.getHeight() * BYTES_PER_PIXEL);
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int pixel = pixels[y * image.getWidth() + x];
                buffer.put((byte) ((pixel >> 16) & 0xFF));
                buffer.put((byte) ((pixel >> 8) & 0xFF));
                buffer.put((byte) (pixel & 0xFF));
                buffer.put((byte) ((pixel >> 24) & 0xFF));
            }
        }
        buffer.flip();
        int textureID = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, textureID);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, image.getWidth(), image.getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        return textureID;
    }

    public static BufferedImage loadImage(String location) {
        try {
            return ImageIO.read(new File(location));
        } catch (IOException e) {
            System.out.println(Errors.IOException);
            e.printStackTrace();
        }
        return null;
    }
}
Is there something wrong with the code or do I have to flip the image before creating the texture?
Most image formats store the data top to bottom. Unless you reshuffle the data while loading the image, this is also the sequence in memory after reading the image.
When you create an OpenGL texture from the loaded image, this memory order is maintained unless you explicitly change the order. So the order in texture memory is still top to bottom.
OpenGL does not really have an image/texture orientation. But when you use texture coordinates, they address the texture in the order it's stored in memory. This means for the two extreme values of the t-coordinate:
t = 0.0 corresponds to the start of the image in memory, which is the top edge of the image.
t = 1.0 corresponds to the end of the image in memory, which is the bottom edge of the image.
Now, looking at your draw calls:
GL11.glTexCoord2f(0, 0);
GL11.glVertex2f(0, 0);
GL11.glTexCoord2f(1, 0);
GL11.glVertex2f(256, 0);
GL11.glTexCoord2f(1, 1);
GL11.glVertex2f(256, 256);
GL11.glTexCoord2f(0, 1);
GL11.glVertex2f(0, 256);
In the default OpenGL coordinate system, the y-coordinate goes bottom to top. So the first two vertices are the bottom vertices of the quad (since they have the smaller y-coordinate), the remaining two are the top two vertices.
Since you used t = 0.0 for the first two vertices, which are at the bottom of the quad, and t = 0.0 corresponds to the top of the image, the top of the image is at the bottom of the quad. Vice versa, you use t = 1.0 for the second two vertices, which are at the top of the quad, and t = 1.0 corresponds to the bottom of the image. Therefore, your image appears upside down.
By far the easiest way to fix this is to change the texture coordinates. Use t = 1.0 for the bottom two vertices, and t = 0.0 for the top two vertices, and the image orientation now matches the orientation of the quad on the screen:
GL11.glTexCoord2f(0.0f, 1.0f);
GL11.glVertex2f(0.0f, 0.0f);
GL11.glTexCoord2f(1.0f, 1.0f);
GL11.glVertex2f(256.0f, 0.0f);
GL11.glTexCoord2f(1.0f, 0.0f);
GL11.glVertex2f(256.0f, 256.0f);
GL11.glTexCoord2f(0.0f, 0.0f);
GL11.glVertex2f(0.0f, 256.0f);
Another option is that you flip the image while reading it in, for example by changing the order of your for loop from:
for (int y = 0; y < image.getHeight(); y++) {
to:
for (int y = image.getHeight() - 1; y >= 0; y--) {
But it's very common to have images in top-down order in memory, and you often don't have control over it if you're using system libraries/frameworks for reading them. So using the texture coordinates to render them in the desired direction is a frequently used approach, and IMHO preferable over shuffling the data around.
I searched for texture implementations that don't use the Slick-Util library.
I found two ways to do this.
The first saves the pixels, with some strange byte-shifting, into a ByteBuffer:
int loadTexture() {
    try {
        BufferedImage img = ImageIO.read(getClass().getClassLoader().getResourceAsStream("background.png"));
        int pixels[] = new int[img.getWidth() * img.getHeight()];
        img.getRGB(0, 0, img.getWidth(), img.getHeight(), pixels, 0, img.getWidth());
        ByteBuffer buffer = BufferUtils.createByteBuffer(img.getWidth() * img.getHeight() * 3);
        // Note: with x in the outer loop, the pixels are written column by column,
        // while glTexImage2D expects them row by row; swap the loops to keep row order.
        for (int x = 0; x < img.getWidth(); x++) {
            for (int y = 0; y < img.getHeight(); y++) {
                int pixel = pixels[y * img.getWidth() + x];
                buffer.put((byte) ((pixel >> 16) & 0xFF));
                buffer.put((byte) ((pixel >> 8) & 0xFF));
                buffer.put((byte) (pixel & 0xFF));
            }
        }
        buffer.flip();
        int textureId = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, textureId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, img.getWidth(), img.getHeight(), 0, GL_RGB, GL_UNSIGNED_BYTE, buffer);
        return textureId;
    } catch (Exception e) {
        e.printStackTrace();
        return 0;
    }
}
This also returns a texture id, and I have no idea how to use that id.
The second way doesn't do any byte-shifting and uses an IntBuffer. It is also a ready-made class that stores different textures with names, and so on.
The code for it:
public class TextureIO {
    private final IntBuffer texture;
    private final int width;
    private final int height;
    private int id;

    public TextureIO(final InputStream inputStream) throws IOException {
        BufferedImage image = ImageIO.read(inputStream);
        width = image.getWidth();
        height = image.getHeight();
        final AffineTransform tx = AffineTransform.getScaleInstance(1, -1);
        tx.translate(0, -height);
        final AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
        image = op.filter(image, null);
        final int[] pixels = image.getRGB(0, 0, width, height, null, 0, width);
        texture = BufferUtils.createIntBuffer(pixels.length);
        texture.put(pixels);
        texture.rewind();
    }

    public void init() {
        GL11.glEnable(GL11.GL_TEXTURE_2D);
        final IntBuffer buffer = BufferUtils.createIntBuffer(1);
        GL11.glGenTextures(buffer);
        id = buffer.get(0);
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, id);
        GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
        GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
        GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA8, width, height, 0, GL12.GL_BGRA, GL12.GL_UNSIGNED_INT_8_8_8_8_REV, texture);
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
    }

    public void bind() {
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, id);
    }

    public void unbind() {
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
    }
}
I'm really new to LWJGL development and want to know which version is better. Since I prefer to implement such things myself, I want lwjgl.jar to be the only library I use.
I read on different sites that the buffer.flip() method is necessary, but why? And why doesn't the second version do this? I also want to understand the difference between these two implementations: what happens in the first, and what in the second?
Thank you!
Both of those are pretty bad implementations IMO. I would recommend watching this video for a more standard approach. Although you would have to also use the PNGDecoder.jar library it is like an extension to the lwjgl library.
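On the buffer.flip() question: a NIO buffer tracks a position and a limit, and LWJGL uploads the bytes between them. After a series of put() calls the position sits at the end of the written data, so without flip() there is nothing left to read. flip() sets the limit to the current position and resets the position to 0, i.e. "now read back what was just written". The second version gets away with rewind() instead, which resets the position but leaves the limit at capacity; since that buffer is filled exactly to capacity, the two are equivalent there. A standalone demonstration:

```java
import java.nio.ByteBuffer;

public class FlipDemo {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(8);
        buffer.put((byte) 1).put((byte) 2).put((byte) 3);
        // After writing: position = 3 (next write slot), limit = 8 (capacity).
        // A reader starting here would see 5 leftover undefined bytes.
        System.out.println(buffer.position() + " " + buffer.limit()); // 3 8

        buffer.flip();
        // After flip: limit = old position, position = 0 -- exactly the written range.
        System.out.println(buffer.position() + " " + buffer.limit()); // 0 3
    }
}
```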
I've got some code that initializes OpenGL to render to a java.awt.Canvas.
The problem is, I can't figure out how I can get the buffer of the canvas and turn it into a BufferedImage.
I've tried overriding getGraphics(), cloning the Raster, and replacing the CanvasPeer with a custom one.
I'm guessing OpenGL doesn't use java graphics in any way then, so how can I get OpenGL's buffer and convert it into a BufferedImage?
I am using LWJGL's code for setting parent:
Display.setParent(display_parent);
Display.create();
You need to copy the data from the OpenGL buffer. I was using this method:
FloatBuffer grabScreen(GL gl)
{
    int w = SCREENWITDH;
    int h = SCREENHEIGHT;
    FloatBuffer bufor = FloatBuffer.allocate(w * h * 4); // 4 = rgba
    gl.glReadBuffer(GL.GL_FRONT);
    gl.glReadPixels(0, 0, w, h, GL.GL_RGBA, GL.GL_FLOAT, bufor); // copy the image into the buffer
    return bufor;
}
You need to use something similar, depending on your OpenGL wrapper. This is a JOGL example.
And here for LWJGL wrapper:
private static synchronized byte[] grabScreen()
{
    int w = screenWidth;
    int h = screenHeight;
    ByteBuffer bufor = BufferUtils.createByteBuffer(w * h * 3);
    GL11.glReadPixels(0, 0, w, h, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, bufor); // copy the image into the buffer
    byte[] byteimg = new byte[w * h * 3];
    bufor.get(byteimg, 0, byteimg.length);
    return byteimg;
}
EDIT
This may also be useful (it's not fully mine, and should be tuned too):
BufferedImage toImage(byte[] data, int w, int h)
{
    if (data.length == 0)
        return null;
    DataBuffer buffer = new DataBufferByte(data, w * h * 3); // 3 bytes per pixel
    int pixelStride = 3; // r, g, b, r, g, b, ... with no gaps
    int scanlineStride = 3 * w; // no extra padding
    int[] bandOffsets = { 0, 1, 2 }; // r, g, b
    WritableRaster raster = Raster.createInterleavedRaster(buffer, w, h, scanlineStride, pixelStride, bandOffsets,
            null);
    ColorSpace colorSpace = ColorSpace.getInstance(ColorSpace.CS_sRGB);
    boolean hasAlpha = false;
    boolean isAlphaPremultiplied = false; // no alpha channel
    int transparency = Transparency.OPAQUE; // RGB data has no transparency
    int transferType = DataBuffer.TYPE_BYTE;
    ColorModel colorModel = new ComponentColorModel(colorSpace, hasAlpha, isAlphaPremultiplied, transparency,
            transferType);
    BufferedImage image = new BufferedImage(colorModel, raster, isAlphaPremultiplied, null);
    // glReadPixels returns rows bottom-up, so flip the image vertically
    AffineTransform flip = AffineTransform.getScaleInstance(1, -1);
    flip.translate(0, -image.getHeight());
    AffineTransformOp op = new AffineTransformOp(flip, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
    image = op.filter(image, null);
    return image;
}
I don't think this is possible for your situation, and here's why:
LWJGL doesn't draw directly to the canvas (at least not in Windows). The canvas is only used to obtain a window handle to provide as the parent window to OpenGL. As such, the canvas is never directly drawn to. To capture the contents, you'll probably have to resort to a screen capture.