I am attempting to follow ThinMatrix's water tutorial. To do this, I need to create an FBO and render it as a texture.
However, as you can see, the water is completely black:
I am using the source code provided directly from the tutorial (copied from the link).
Basically, I create a FrameBuffer:
public FrameBufferObject() { // called when loading the game
    initialiseReflectionFrameBuffer();
    initialiseRefractionFrameBuffer(); // let's ignore refraction for now, as I only use reflection at the moment
}
private void initialiseReflectionFrameBuffer() {
    reflectionFrameBuffer = createFrameBuffer();
    reflectionTexture = createTextureAttachment(REFLECTION_WIDTH, REFLECTION_HEIGHT);
    reflectionDepthBuffer = createDepthBufferAttachment(REFLECTION_WIDTH, REFLECTION_HEIGHT);
    unbindCurrentFrameBuffer();
}
I then create a texture attachment:
private int createTextureAttachment(int width, int height) {
    int texture = GL11.glGenTextures();
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture);
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, width, height,
            0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
    GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0,
            texture, 0);
    return texture;
}
And I also create a depth buffer attachment:
private int createDepthBufferAttachment(int width, int height) {
    int depthBuffer = GL30.glGenRenderbuffers();
    GL30.glBindRenderbuffer(GL30.GL_RENDERBUFFER, depthBuffer);
    GL30.glRenderbufferStorage(GL30.GL_RENDERBUFFER, GL11.GL_DEPTH_COMPONENT, width, height);
    GL30.glFramebufferRenderbuffer(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT,
            GL30.GL_RENDERBUFFER, depthBuffer);
    return depthBuffer;
}
Then, I am rendering the objects to the frame buffer object:
Main.TerrainDemo.shader.start();
fbos.bindReflectionFrameBuffer();
for (Grass g : Main.TerrainDemo.toDraw) {
    g.render();
}
fbos.unbindCurrentFrameBuffer();
I bind the frame buffer like this:
private void bindFrameBuffer(int frameBuffer, int width, int height) {
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0); // make sure no texture is bound
    GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, frameBuffer);
    GL11.glViewport(0, 0, width, height);
    System.out.println("Bound");
    if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) == GL30.GL_FRAMEBUFFER_COMPLETE) {
        System.out.println("Frame buffer setup is complete: " + GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER));
    }
    System.out.println("Error: " + GL11.glGetError());
}
The "error" when I print out glGetError() is a normal 0. The "frame buffer setup" message does print out.
After which, I expect that calling fbos.getReflectionTexture() would return a texture ID... And it does! It successfully returns texture ID 12. However, the texture when I bind it is completely black.
public int getReflectionTexture() { // get the resulting texture
    return reflectionTexture; // remember, this was our original texture attachment
}
reflectionTexture = createTextureAttachment(REFLECTION_WIDTH,REFLECTION_HEIGHT);
I am unable to work out what is wrong, and why the texture is not displaying anything rendered to it.
Things I know are not wrong:
I am definitely drawing and texturing the water itself correctly. I can use any pre-loaded texture and texture the water just fine:
Also, objects being rendered to the FBO have the correct translations, rotations, etc. If I don't bind any framebuffer, the foliage intended for the FBO is drawn (and seen) on my screen correctly.
So I realise this is a super old question that probably nobody cares about anymore, but I recently found myself here after doing the same tutorial and having similar problems.
After exercising a bit of Google-fu, I realised I had made a very basic error in my code that could be the same solution.
I simply wasn't calling:
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
while the frame buffer object was bound, before filling it up again.
I just went ahead and put it at the end of the bind frame buffer function and it worked!
private void bindFrameBuffer(int frameBuffer, int width, int height) {
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0); // make sure no texture is bound
    GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, frameBuffer);
    GL11.glViewport(0, 0, width, height);
    System.out.println("Bound");
    if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) == GL30.GL_FRAMEBUFFER_COMPLETE) {
        System.out.println("Frame buffer setup is complete: " + GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER));
    }
    System.out.println("Error: " + GL11.glGetError());
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); // clear the FBO before drawing into it again
}
It might not be the same problem, as my error was particularly basic, but it's worth a try.
Does anyone know if this solution is correct, or is it a band-aid over a larger problem?
I have a problem with scaling images up. I have a png file that looks like this:
raw png
The image is 8px * 8px and has some red straight lines on it.
But when I draw this image with Java and scale it up, this happens: java image
As you can barely see, the line is not exactly straight. It is always one pixel off and makes this kind of wavy shape. If the image gets drawn somewhere else on the frame, the "waves" are somewhere else. The image is rotated 90°, but I tested it without rotation and the artifact was still there. Apart from that, I do need rotated images.
Is there any way to fix this? I enabled text antialiasing on the same Graphics2D object. Is there some sort of antialiasing for this as well?
Code
private void loadImage(String path, int field, int imageNumber) {
    BufferedImage image;
    image = new Resource().readImg(path);
    images[field][imageNumber][0] = image;
    images[field][imageNumber][1] = rotateClockwise90(image);
    images[field][imageNumber][2] = rotateClockwise90(rotateClockwise90(image));
    images[field][imageNumber][3] = rotateClockwise90(rotateClockwise90(rotateClockwise90(image)));
}

private BufferedImage rotateClockwise90(BufferedImage src) {
    int width = src.getWidth();
    int height = src.getHeight();
    BufferedImage dest = new BufferedImage(height, width, src.getType());
    Graphics2D graphics2D = dest.createGraphics();
    graphics2D.translate((height - width) / 2, (height - width) / 2);
    graphics2D.rotate(Math.PI / 2, height / 2, width / 2);
    graphics2D.drawRenderedImage(src, null);
    return dest;
}
When the program starts I load the image and rotate it in all 4 directions, so I don't have to do this over and over again while the program is running.
public BufferedImage getTile(int type, int part, int rotation) {
    return images[type][part][rotation];
}
And then all I have to do is call this get method and draw the image:
g2d.drawImage(tiles.getShipTile(type, part, rotation), x, y, null);
I actually found a way to avoid these weird pixels, but this method makes the image a little bit blurry.
Instead of using
g2d.drawImage(img, x, y, width, height, null);
you can simply use
g2d.drawImage(img.getScaledInstance(width, height, Image.SCALE_AREA_AVERAGING), x, y, null);
which does basically the same thing, but when the image is scaled up it uses this smoothing filter.
I tried this and noticed that it's not very comfortable, because it lags a lot.
So I just scale the images up at the beginning, when I also rotate them.
As I said, this method is a bit blurry, but if there is no other way of avoiding this problem I will have to make use of it. You almost can't see the blur, so this is an option for me.
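That pre-scaling step can be sketched roughly like this (the class and method names are hypothetical, and this uses Graphics2D interpolation hints rather than getScaledInstance, since drawImage with hints is synchronous and avoids the asynchronously-produced Image that getScaledInstance returns):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class TilePreScaler {
    // Scale once at load time so no filtering cost is paid per frame.
    public static BufferedImage preScale(BufferedImage src, int width, int height) {
        BufferedImage dest = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = dest.createGraphics();
        // Bilinear interpolation smooths the scaled pixels (the "antialiasing"
        // the question asks about for images).
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.setRenderingHint(RenderingHints.KEY_RENDERING,
                RenderingHints.VALUE_RENDER_QUALITY);
        g.drawImage(src, 0, 0, width, height, null); // synchronous scaled draw
        g.dispose();
        return dest;
    }
}
```

The scaled BufferedImage can then be cached next to the four rotations and drawn each frame with a plain g2d.drawImage(img, x, y, null).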
Edit: I've tried both glVertexAttribIPointer() and glVertexAttribPointer() with the types GL_INT and GL_UNSIGNED_INT. Hard-coding a constant in the shader works (fragColor = texture(samplerArr, vec3(uv, 2));), so passing the int to the shader seems to be the issue.
That title's a mouthful, but I think it explains the issue I'm experiencing pretty well. I've got this code that creates an array texture and is (hopefully) correct, as I can see some images. If I ask for the 0th image, I get the correct image, but for any texture index larger than 0 I receive the last texture in the array, not the expected one. This is the code for creating the texture (the images can be seen, so I know the image-to-byte conversion works):
texture = glGenTextures();
glBindTexture(GL_TEXTURE_2D_ARRAY, texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // 1 byte per component
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST); // pixel perfect (with mipmapping)
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA, w, h, images.length, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
try {
    for (int i = 0; i < images.length; i++) {
        InputStream is = Texture.class.getResourceAsStream(images[i].getFullPath());
        if (is == null) {
            throw new FileNotFoundException("Could not locate texture file: " + images[i].getFullPath());
        }
        PNGDecoder img = new PNGDecoder(is);
        ByteBuffer buff = ByteBuffer.allocateDirect(4 * img.getWidth() * img.getHeight());
        img.decode(buff, img.getWidth() * 4, Format.RGBA);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // restore the default 4-byte row alignment
        buff.flip();
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, w, h, 1, GL_RGBA, GL_UNSIGNED_BYTE, buff);
        Debug.log("{} => {}", i, images[i].getFullPath());
    }
    glGenerateMipmap(GL_TEXTURE_2D_ARRAY);
} catch (Exception e) {
    Debug.error("Unable to create texture from path: {}", path);
    Debug.error(e, true);
}
I think most of that code is self-explanatory, but I'd just like to make sure I'm doing it right. I then pass the texture index to the shader through this vertex attribute:
bindVertexArray();
glBindBuffer(GL_ARRAY_BUFFER, tbo);
glBufferData(GL_ARRAY_BUFFER, tBuff, GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 1, GL_INT, false, 0, 0);
glDisableVertexAttribArray(2);
unbindVertexBuffer();
memFree(tBuff);
unbindVertexArray();
I think that is also correct, as I can pass the texture ID through and receive an output. That leaves one piece that could be wrong (but I still think is right for the same reasons), and that's the fragment shader:
fragColor = texture(samplerArr, vec3(uv, textureId));
Texture id is defined as
flat in int textureId;
and the sampler is
uniform sampler2DArray samplerArr;
I even tried passing through a uniform with the total number of textures in the array and using
fragColor = texture(samplerArr, vec3(uv, float(textureId) / float(totalTextures)));
instead, but that didn't change any textures.
I don't see anything wrong with any of those pieces of code (except for the last one), but I am new to array textures (and OpenGL in general), so I was hoping someone out there has had or has solved this issue so they can guide me to the solution.
*I've been trying my very best to implement renderable texture functionality using OpenGL's framebuffering together with the LWJGL library from Java. However, the result I always get is a 100% **black** texture.*
I'm simply asking for some advice about what the problem is. I'm not rendering any specific shapes. I bind my generated framebuffer, call glClearColor(1, 0, 0, 1); and then glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);, and then unbind the framebuffer. But when I try to render the texture bound to the framebuffer, the texture only shows black, where it actually should be red, right?
Also, the glCheckFramebufferStatus() returns GL_FRAMEBUFFER_COMPLETE so I suppose that the error lies within the rendering part, rather than the initialization phase. But I'll show the initialization code anyways.
The initialization code:
public RenderableTexture initialize(int width, int height, int internalFormat, int[] attachments, boolean useDepthBuffer) {
    if (!GLContext.getCapabilities().GL_EXT_framebuffer_object) {
        System.err.println("FrameBuffers not supported on your graphics card!");
    }
    this.width = width;
    this.height = height;
    hasDepthBuffer = useDepthBuffer;
    fbo = glGenFramebuffers();
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    id = glGenTextures();
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, (ByteBuffer) null);
    if (useDepthBuffer) {
        rbo = glGenRenderbuffers();
        glBindRenderbuffer(GL_RENDERBUFFER, rbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo);
    }
    glFramebufferTexture2D(GL_FRAMEBUFFER, attachments[0], GL_TEXTURE_2D, id, 0);
    int[] drawBuffers = new int[attachments.length];
    for (int i = 0; i < attachments.length; i++)
        if (attachments[i] == GL_DEPTH_ATTACHMENT)
            drawBuffers[i] = GL_NONE;
        else
            drawBuffers[i] = attachments[i];
    glDrawBuffers(Util.toIntBuffer(drawBuffers));
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        System.err.println("Warning! Incomplete Framebuffer");
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return this;
}
internalFormat has the value of GL_RGBA8 and width and height have the value of 512 and 512. attachments[] only contains 1 value and that's GL_COLOR_ATTACHMENT0. useDepthBuffer is set to true.
The code above is only called once.
This is the rendering code:
public RenderManager draw() {
    glClearColor(bg.x, bg.y, bg.z, bg.w);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    texture.bindAsRenderTarget(true);
    texture.releaseRenderTarget();
    quad.draw();
    return this;
}
I set the clear color to black (0, 0, 0, 1) and then clear the screen. I then call texture.bindAsRenderTarget(true);. The texture object is the one that contains the initialize method from above, so some variables are shared between that method and bindAsRenderTarget().
This method looks like this:
public RenderableTexture bindAsRenderTarget(boolean clear) {
    glViewport(0, 0, width, height);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    glClearColor(1, 0, 0, 1f);
    if (clear)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    return this;
}
As you can see, I adjust the viewport to the size of the texture / framebuffer. I then bind the framebuffer and set the clear color to red. Then, since I passed true in the rendering code, it (as I believe) clears the currently bound framebuffer to red.
texture.releaseRenderTarget(); adjusts the viewport to fit the display and then calls glBindFramebuffer(GL_FRAMEBUFFER, 0);
The final line of code quad.draw(); simply binds the textureID of the texture bound to the framebuffer and then draws a simple quad with it.
That's pretty much all there is.
I can assure you that I'm rendering the quad correctly, since I can bind textures loaded from PNG files to it and the texture is successfully shown.
So to make things clear, the main question is pretty much:
Why on earth is the texture black after the clear as it should be red? Where and what am I doing wrong?
EDIT: I have a feeling it might have something to do with the binding of different GL objects. Does the renderbuffer have to be bound while rendering to its framebuffer? Does it not? Does it matter? How about the texture? At what points should they be bound?
I did something very stupid. The class that I initialized the fbo texture within (RenderableTexture.class) was a subclass of Texture.class. The binding method, including the textureID, was supposed to be inherited from the Texture class, as I had declared the id variable as protected. However, I had accidentally created a local private variable within the subclass; thus, when generating the texture, I was saving the textureID to the local id variable, and when binding, using the uninitialized id from the superclass. Sorry for anyone trying to solve this without being able to do so :s
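The underlying Java pitfall can be reproduced in a few lines (the class names here are just illustrative, not the actual tutorial code): a private field in the subclass shadows the protected one the superclass methods read.

```java
class Texture {
    protected int id;               // field the superclass binding logic reads
    public int bind() { return id; } // always sees Texture.id
}

class RenderableTexture extends Texture {
    private int id;                  // BUG: shadows Texture.id instead of reusing it
    public void create() { id = 12; } // writes only the subclass field
}
```

After create(), bind() still returns 0, because the superclass method resolves `id` to its own (never-assigned) field. Deleting the subclass declaration makes both methods use the same field.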
I have a textured skydome. It renders white when an image is attached, but it renders correctly when a plain color is given. I have reasons to assume the texture is being overwritten, so some tips on this would be great. It used to work fine, displaying the texture appropriately.
EDIT: If I print the texture directly to the fbo, it does show the texture. However, when I map it to the sphere it shows up white. Give the sphere a color, and it shows correctly with the color. Also, for the record, white is not the clear color. And I use an image that's quite large (~3000x1000).
ADD: No errors are given anywhere.
Changing:
glActiveTextureARB(GL_TEXTURE6_ARB);
glCallList(SkySphere.getDisplayList());
To:
glActiveTextureARB(GL_TEXTURE0_ARB);
glCallList(SkySphere.getDisplayList());
displays the proper image once, on the first cycle; then it is white again.
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
glViewport(0,0,screenWidth,screenHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(90.0f, ((float)screenWidth/(float)screenHeight),0.1f,100.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glShadeModel(GL_SMOOTH);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glDisable(GL_DEPTH_TEST);
glClearColor(1.0f,1.0f,0.0f,1.0f);
glClear (GL_COLOR_BUFFER_BIT);
glLoadIdentity ();
camera.look();
glEnable(GL_TEXTURE_2D);
glDisable(GL_LIGHTING);
glActiveTextureARB(GL_TEXTURE6_ARB);
glCallList(SkySphere.getDisplayList());
glDisable(GL_TEXTURE_2D);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
This is the skysphere code:
public static int loadTexture(String filename) {
    ByteBuffer buf = null;
    int tWidth = 0;
    int tHeight = 0;
    // .. load png into buffer ..
    // Create a new texture object in memory and bind it
    textureId = GL11.glGenTextures();
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
    // All RGB bytes are aligned to each other and each component is 1 byte
    GL11.glPixelStorei(GL11.GL_UNPACK_ALIGNMENT, 1);
    // Upload the texture data and generate mip maps (for scaling)
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, tWidth, tHeight, 0,
            GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, buf);
    // Setup what to do when the texture has to be scaled
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
    return textureId;
}
public static int getDisplayList() {
    return displayList;
}

public static int makeSphere() {
    Sphere s = new Sphere(); // an LWJGL class for drawing a sphere
    s.setOrientation(GLU.GLU_INSIDE); // normals point inwards
    s.setTextureFlag(true); // generate texture coords
    displayList = GL11.glGenLists(1);
    GL11.glNewList(displayList, GL11.GL_COMPILE);
    {
        GL11.glPushMatrix();
        {
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, getTextureId());
            //GL11.glTranslatef(0,0,0);
            GL11.glRotatef(90f, 1, 0, 0); // rotate the sphere to align the axis vertically
            s.draw(1, 48, 48); // run GL commands to draw the sphere
        }
        GL11.glPopMatrix();
    }
    GL11.glEndList();
    return displayList;
}
In initGL:
SkySphere.createShader();
SkySphere.loadTexture("textures/panorama2.png");
SkySphere.makeSphere();
Also I'm doing most of my work in framebuffers:
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, modelsFboId);
And in one occasion copy the depth to a texture:
glActiveTextureARB(GL_TEXTURE3_ARB);
glBindTexture(GL_TEXTURE_2D, modelsDepthTextureId);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, screenWidth, screenHeight);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
I used
glPushAttrib(GL_ALL_ATTRIB_BITS);
at the beginning and
glPopAttrib();
at the end to reset the OpenGL states each frame.
I want to scale an image using OpenGL; can anyone provide code showing how to do this?
PS: I am using JGL as an OpenGL library for Java.
I will be brief on this, as you can find tutorials for pretty much every part of the solution.
You need to load your image from the disk, and create a texture.
You need to create a framebuffer object (FBO) with the desired target dimensions (in your case, double width, double height). Make the FBO active for rendering.
Render a fullscreen quad with your texture applied.
Read the result using glReadPixels().
And that's it ... the texture-mapping hardware will take care of rescaling it for you. But it will likely be slower than doing it on the CPU, especially for "scaling 2x".
EDIT: as requested by OP, it is necessary to include source code.
So ... for loading an image in Java, you would go for this:
BufferedImage img;
try {
    img = ImageIO.read(new File("myLargeImage.jpg"));
} catch (IOException e) { /* ... */ }
int w = img.getWidth(), h = img.getHeight();
For JGL, one might want to convert an image to an array:
byte[][][] imageData = new byte[w][h][3]; // alloc buffer for RGB data
for (int y = 0; y < h; ++y) {
    for (int x = 0; x < w; ++x) {
        int RGBA = img.getRGB(x, y); // packed as 0xAARRGGBB
        imageData[x][y][0] = (byte)((RGBA >> 16) & 0xff); // red
        imageData[x][y][1] = (byte)((RGBA >> 8) & 0xff);  // green
        imageData[x][y][2] = (byte)(RGBA & 0xff);         // blue
    }
}
Note that there could be alpha channel as well (for transparency) and that this will likely be quite slow. One could also use:
int[] rgbaData = img.getRGB(0, 0, w, h, new int[w * h], 0, w);
But that doesn't return the data in the correct format expected by JGL. Tough luck.
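If you do take the bulk getRGB() route anyway, a second pass can repack the ints into the byte[][][] layout used above (the helper name here is hypothetical; this is just plain Java, no JGL involved):

```java
public class PixelUnpack {
    // Unpack row-major TYPE_INT_ARGB pixels (0xAARRGGBB) into an RGB byte
    // array indexed [x][y][channel], the layout expected by the texture upload.
    public static byte[][][] unpack(int[] argb, int w, int h) {
        byte[][][] out = new byte[w][h][3];
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                int p = argb[y * w + x];                 // row-major input
                out[x][y][0] = (byte)((p >> 16) & 0xff); // red
                out[x][y][1] = (byte)((p >> 8) & 0xff);  // green
                out[x][y][2] = (byte)(p & 0xff);         // blue
            }
        }
        return out;
    }
}
```

This is still two full passes over the pixels, but the bulk getRGB() call itself is much faster than per-pixel getRGB(x, y).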
Then you need to fill a texture:
int[] texId = {0};
gl.glGenTextures(1, texId); // generate texture id for your texture (can skip this line)
gl.glEnable(GL.GL_TEXTURE_2D);
gl.glBindTexture(GL.GL_TEXTURE_2D, texId[0]); // bind the texture
gl.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1); // set alignment of data in memory (a good thing to do before glTexImage)
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_CLAMP);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_CLAMP); // set clamp (GL_CLAMP_TO_EDGE would be better)
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR); // set linear filtering (so you can scale your image)
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB, w, h, 0, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, imageData); // upload image data to the texture
Once you have a texture, you can draw stuff. Let's resample your image:
int newW = ..., newH = ...; // fill-in your values
gl.glViewport(0, 0, newW, newH); // set viewport
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glLoadIdentity();
gl.glColor3f(1.0f, 1.0f, 1.0f); // set "white" color
gl.glDisable(GL.GL_CULL_FACE); // disable backface culling
gl.glDisable(GL.GL_LIGHTING); // disable lighting
gl.glDisable(GL.GL_DEPTH_TEST); // disable depth test
// setup OpenGL so that it renders texture colors without modification (some of that is not required by default)
gl.glBegin(GL.GL_QUADS);
gl.glTexCoord2f(0.0f, 1.0f);
gl.glVertex2f(-1.0f, -1.0f);
gl.glTexCoord2f(1.0f, 1.0f);
gl.glVertex2f(+1.0f, -1.0f);
gl.glTexCoord2f(1.0f, 0.0f);
gl.glVertex2f(+1.0f, +1.0f);
gl.glTexCoord2f(0.0f, 0.0f);
gl.glVertex2f(-1.0f, +1.0f);
gl.glEnd();
// draw a fullscreen quad with your texture on it (scaling is performed here)
Now that the scaled image is rendered, all that needs to be done, is to download it.
byte[][][] newImageData = new byte[newW][newH][3];
gl.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1); // set alignment of data in memory (this time pack alignment; a good thing to do before glReadPixels)
gl.glReadPixels(0, 0, newW, newH, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, newImageData);
And then the data can be converted to BufferedImage in similar way the input image img was converted to imageData.
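That reverse conversion might look like the following sketch (helper name hypothetical). Note that glReadPixels returns rows bottom-to-top, so a vertical flip of the y index may also be needed:

```java
import java.awt.image.BufferedImage;

public class ImagePack {
    // Pack an RGB byte array indexed [x][y][channel] back into a BufferedImage.
    public static BufferedImage toImage(byte[][][] rgb, int w, int h) {
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                int r = rgb[x][y][0] & 0xff; // mask to undo sign extension
                int g = rgb[x][y][1] & 0xff;
                int b = rgb[x][y][2] & 0xff;
                img.setRGB(x, y, (r << 16) | (g << 8) | b);
            }
        }
        return img;
    }
}
```

The result can then be written out with ImageIO.write(img, "png", file).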
Note that I used variable gl, that is an instance of the GL class. Put the code into some JGL example and it should work.
One word of caution: JGL doesn't seem to support framebuffer objects, so you are limited in output image size by your OpenGL window size (attempting to create larger images will result in black borders). This can be solved with multipass rendering (render your image tile by tile and assemble the full image in memory).