I'm trying to call the glTexImage2D function of the OpenGL library. I'm using LWJGL as the framework for using OpenGL in Java.
According to the documentation, this method accepts the following parameters:
public static void glTexImage2D(int target,
int level,
int internalformat,
int width,
int height,
int border,
int format,
int type,
java.nio.ByteBuffer pixels)
My implementation of this is below.
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, 1092, 1092, 0, GL11.GL_RGB, GL11.GL_INT, imageData);
However, I am getting an error:
Exception in thread "main" java.lang.IllegalArgumentException: Number of remaining buffer elements is 3577392, must be at least 14309568. Because at most 14309568 elements can be returned, a buffer with at least 14309568 elements is required, regardless of actual returned element count
at org.lwjgl.BufferChecks.throwBufferSizeException(BufferChecks.java:162)
at org.lwjgl.BufferChecks.checkBufferSize(BufferChecks.java:189)
at org.lwjgl.BufferChecks.checkBuffer(BufferChecks.java:230)
at org.lwjgl.opengl.GL11.glTexImage2D(GL11.java:2855)
at TextureLab.testTexture(TextureLab.java:100)
at TextureLab.start(TextureLab.java:39)
at TextureLab.main(TextureLab.java:20)
I've done a lot of searching, and I assume my method of creating a ByteBuffer for the last parameter is what is causing the issue.
My code for getting a ByteBuffer from an image is as follows:
BufferedImage img = ImageIO.read(file);
byte[] pixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
ByteBuffer buffer = BufferUtils.createByteBuffer(pixels.length);
buffer.put(pixels);
buffer.flip();
buffer.rewind();
I've substituted the length of the buffer with width*height*4, and even hardcoded the number contained in the error, all with no luck. Any ideas what I'm doing wrong? I think the issue is in my ByteBuffer, but even of that I'm not sure.
The LWJGL layer is telling you that your buffer should be at least 14309568 bytes, but you only provide 3577392. The reason is that you used GL_INT as the type parameter of the glTexImage2D call, so the GL assumes each pixel is represented by three 4-byte integer values.
You want GL_UNSIGNED_BYTE for typical 8-bit-per-channel image content, which maps exactly to the 3577392 bytes you are currently providing.
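To see where both numbers in the error come from, here is a small arithmetic sketch (the 1092x1092 dimensions are taken from the call above; the class name is made up for illustration):

```java
public class TexSizeCheck {
    public static void main(String[] args) {
        int width = 1092, height = 1092;
        int components = 3; // GL_RGB means three components per pixel

        // With type GL_INT, each component occupies a 4-byte integer
        int intSize = width * height * components * 4;
        // With type GL_UNSIGNED_BYTE, each component occupies a single byte
        int byteSize = width * height * components;

        System.out.println(intSize);  // 14309568, the size LWJGL demanded
        System.out.println(byteSize); // 3577392, the size the image data actually has
    }
}
```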
I have two ByteBuffers that compare as equal but somehow behave differently when used in the GL11.glTexImage2D() method.
My code:
IntBuffer width = BufferUtils.createIntBuffer(1);
IntBuffer height = BufferUtils.createIntBuffer(1);
IntBuffer comp = BufferUtils.createIntBuffer(1);
ByteBuffer data = STBImage.stbi_load("dsc8x12.bmp", width, height, comp, 4);
byte[] bytes = new byte[data.limit()];
for (int i = 0; i < bytes.length; i++) {
bytes[i] = data.get(i);
}
ByteBuffer data2 = ByteBuffer.wrap(bytes);
System.out.println(data.equals(data2));
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, 1024, 12, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, data);
The console output is true.
Here is how the window looks when the final argument to GL11.glTexImage2D() is either data or data2.
When data is used:
When data2 is used:
The reason why you get two different results with the two different ByteBuffers is that LWJGL only works with direct NIO Buffers, that is, ByteBuffers that are not backed by arrays.
The reason for this particular output that you are getting is two-fold:
the non-direct ByteBuffer you supply (the one being created by .wrap(array)) has an internal address field value of 0. This is what LWJGL looks at when it actually calls the native OpenGL function. It reads that address field value and expects this to be the virtual memory address of the client memory to upload to the texture.
Now, there are some OpenGL functions that will simply SEGFAULT when given an invalid pointer such as zero. In this case, however, the glTexImage2D function is semantically overloaded: when it receives a zero address as the last argument, it only allocates texture memory of the requested size (determined by the width, height, format, and type arguments) and does not copy any client memory to the server (i.e. the GPU).
The ByteBuffer returned by STBImage.stbi_load is direct, and hence, the correct texture memory from the client's virtual memory address space is uploaded to OpenGL.
So, in essence: When you use LWJGL, you must always only use direct NIO Buffers, not ones that are wrappers of arrays!
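The difference is easy to observe without any OpenGL at all: java.nio reports it via isDirect(). This sketch uses ByteBuffer.allocateDirect in place of LWJGL's BufferUtils.createByteBuffer (which additionally sets native byte order); it also shows why the equals() check in the question printed true:

```java
import java.nio.ByteBuffer;

public class DirectBufferCheck {
    public static void main(String[] args) {
        // Direct buffer: lives outside the Java heap and has a real native address
        ByteBuffer direct = ByteBuffer.allocateDirect(16);

        // Array-backed buffer: a wrapper around a plain Java array, native address 0
        ByteBuffer wrapped = ByteBuffer.wrap(new byte[16]);

        System.out.println(direct.isDirect());      // true  -> safe to pass to LWJGL
        System.out.println(wrapped.isDirect());     // false -> LWJGL sees address 0
        System.out.println(direct.equals(wrapped)); // true  -> equals() only compares contents
    }
}
```

So two buffers can be "completely identical" by equals() while only one of them is usable by LWJGL.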
I need to write a resampling function that takes an input image and generates an output image in Java.
The image type is TYPE_BYTE_GRAY.
As all pixels will be read and written, I need an efficient method to access the image buffer(s).
I don't trust that methods like getRGB/setRGB will be appropriate as they will perform conversions. I am after functions that will allow me the most direct access to the stored buffer, with efficient address computation, no image copy and minimum overhead.
Can you help me? I have found examples of many kinds, for instance using a WritableRaster, but nothing sufficiently complete.
Update:
As suggested by @FiReTiTi, the trick is to get a WritableRaster from the image and get its associated buffer as a DataBufferByte object.
DataBufferByte SrcBuffer= (DataBufferByte)Src.getRaster().getDataBuffer();
Then you have the option to access the buffer directly using its getElem/setElem methods
SrcBuffer.setElem(i, SrcBuffer.getElem(i) + 1);
or to extract an array of bytes
byte[] SrcBytes = SrcBuffer.getData();
SrcBytes[i] = (byte) (SrcBytes[i] + 1);
Both methods work. I don't know yet if there's a difference in performance...
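A quick way to convince yourself that both approaches touch the same storage (image size and values here are arbitrary):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class RasterAccessDemo {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_BYTE_GRAY);
        DataBufferByte db = (DataBufferByte) img.getRaster().getDataBuffer();
        byte[] data = db.getData();

        db.setElem(0, 200);                 // write through the DataBuffer...
        System.out.println(data[0] & 0xFF); // ...visible in the extracted array: 200

        data[5] = (byte) 42;                // write through the array...
        System.out.println(db.getElem(5));  // ...and getElem sees it: 42
    }
}
```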
The easiest way (but not the fastest) is to use the Raster myimage.getRaster(), and then use the methods getSample(x,y,c) and setSample(x,y,c,v) to access and modify the pixels values.
The fastest way is to access the DataBuffer (direct access to the array backing the image), so for a TYPE_BYTE_GRAY BufferedImage it would be byte[] buffer = ((DataBufferByte)myimage.getRaster().getDataBuffer()).getData(). Just be careful that the pixels are stored as signed bytes, not unsigned bytes, so every time you want to read a pixel value you have to mask it: buffer[x] & 0xFF.
Here is a simple test:
BufferedImage image = new BufferedImage(256, 256, BufferedImage.TYPE_BYTE_GRAY) ;
byte[] buffer = ((DataBufferByte)image.getRaster().getDataBuffer()).getData() ;
System.out.println("buffer[0] = " + (buffer[0] & 0xFF)) ;
buffer[0] = 1 ;
System.out.println("buffer[0] = " + (buffer[0] & 0xFF)) ;
And here is the output:
buffer[0] = 0
buffer[0] = 1
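The & 0xFF mask matters as soon as a pixel value exceeds 127; without it, Java's signed bytes produce negative numbers:

```java
public class SignedByteDemo {
    public static void main(String[] args) {
        byte pixel = (byte) 200; // a bright gray value, stored in a signed byte

        System.out.println(pixel);        // -56: the raw signed interpretation
        System.out.println(pixel & 0xFF); // 200: the actual pixel intensity
    }
}
```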
It is possible to get the underlying buffer with yourimage.getRaster().getDataBuffer(), but it will require some conversion, since this is one long array. You can find the order of the pixels by setting some elements to an extreme value and rendering the picture to see how the pixels are affected.
I keep getting this exception:
Exception in thread "main" org.lwjgl.opengl.OpenGLException: Cannot use offsets when Array Buffer Object is disabled
at org.lwjgl.opengl.GLChecks.ensureArrayVBOenabled(GLChecks.java:93)
at org.lwjgl.opengl.GL11.glVertexPointer(GL11.java:2680)
at Joehot200.TerrainDemo.render(TerrainDemo.java:2074)
at Joehot200.TerrainDemo.enterGameLoop(TerrainDemo.java:3266)
at Joehot200.TerrainDemo.startGame(TerrainDemo.java:3490)
at StartScreenExperiments.Test2.resartTDemo(Test2.java:55)
at StartScreenExperiments.Test2.main(Test2.java:41)
However, the array buffer object IS enabled!
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vboVertexHandle);
glVertexPointer(3, GL_FLOAT, 0, 0L);
As you can see, two lines before the glVertexPointer call (the line the error points at), I am clearly enabling the array buffer!
What is wrong here?
Vertex Buffers are not something you enable or disable - LWJGL is misleading you.
You need to understand that the glVertexPointer command uses whatever is bound to GL_ARRAY_BUFFER ("Array Buffer Object") as its memory source (beginning with OpenGL 1.5).
In certain versions of OpenGL (1.5-3.0 and 3.1+ compatibility) if you have 0 bound to GL_ARRAY_BUFFER, then the last parameter to glVertexPointer is an actual pointer to your program's memory (client memory) rather than an offset into GPU memory (server memory). Core OpenGL 3.1+ does not even support client-side vertex storage, so that last parameter is always an offset.
LWJGL's error message is poorly worded:
Exception in thread "main" org.lwjgl.opengl.OpenGLException: Cannot use offsets when Array Buffer Object is disabled.
The error message really means that you have 0 bound to GL_ARRAY_BUFFER when you call glVertexPointer (...). LWJGL apparently considers Array Buffer Objects "disabled" whenever nothing is bound to GL_ARRAY_BUFFER. That is not too unreasonable, but it does lead you to believe that this is a state that can be enabled or disabled using glEnable or glDisable; it is not.
Remember how I described the last parameter to glVertexPointer as an offset when you have something bound to GL_ARRAY_BUFFER? Since LWJGL is Java-based, there is no way to pass an arbitrary memory address as an integer. An integer value passed to glVertexPointer (...) must be an offset into the currently bound vertex buffer's memory.
Client-side vertex specification (unsupported in core GL 3.1+)
void glVertexPointer(int size, int type, int stride, java.nio.ByteBuffer pointer);
Server-side vertex specification (takes an offset into GL_ARRAY_BUFFER)
void glVertexPointer(int size, int type, int stride, long pointer_buffer_offset);
As you can see, there is an alternate form of the glVertexPointer function in LWJGL that can take memory not stored in a buffer object, where you pass a specialization of java.nio.Buffer. That is the form you are expected to use when you have no vertex buffer bound and that is what the error message is really telling you.
That explains what the error message you are seeing actually means, but not its cause.
For some reason vboVertexHandle appears to be 0 or some value not generated using glGenBuffers (...) in your application. Showing the code where you initialize the VBO would be helpful.
GL_ARRAY_BUFFER is not one of the allowed values to glEnable. If you want to use a vertex buffer object with the vertex pointer, you enable the vertex array client state with glEnableClientState:
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vboVertexHandle);
glVertexPointer(3, GL_FLOAT, 0, 0L); //This is line 2074 of the TerrainDemo class.
Side note: this functionality has been deprecated since the OpenGL 3 core profile. If there is no restriction forcing you to stick to this old OpenGL version, it would be a good idea to start with modern OpenGL (especially since you're already using VBOs).
I'm having a really bad time dealing with RGB values in Java, which made me start trying small experiments.
I came down to this: load an image, get its RGB values, and create a new image with the same values. Unfortunately, this does not work (the images are displayed differently, see picture), as per the following code. Can someone see what's wrong?
BufferedImage oriImage=ImageIO.read(new URL("http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png"));
int[] oriImageAsIntArray = new int[oriImage.getWidth()*oriImage.getHeight()];
oriImage.getRGB(0, 0, oriImage.getWidth(),oriImage.getHeight(), oriImageAsIntArray, 0, 1);
BufferedImage bfImage= new BufferedImage(oriImage.getWidth(),oriImage.getHeight(),
BufferedImage.TYPE_INT_ARGB);
bfImage.setRGB(0,0,bfImage.getWidth(),bfImage.getHeight(),oriImageAsIntArray, 0, 1);
Apparently, getRGB and setRGB were not being used correctly.
I changed the code to:
oriImage.getRGB(0, 0, oriImage.getWidth(),oriImage.getHeight(), oriImageAsIntArray, 0, oriImage.getWidth());
(...)
bfImage.setRGB(0,0,bfImage.getWidth(),bfImage.getHeight(),oriImageAsIntArray, 0, bfImage.getWidth());
... and the picture displayed correctly. I still do not understand what this last argument is. In the JavaDoc, it is described as:
scansize - scanline stride for the rgbArray
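In other words, scansize is the distance (in array elements) between the start of one row and the start of the next in rgbArray: pixel (x, y) lands at index offset + y*scansize + x. With scansize equal to the image width, rows are packed back to back, which is why the fix works. A minimal round-trip sketch with a tiny synthetic image (sizes and colors are arbitrary):

```java
import java.awt.image.BufferedImage;

public class ScansizeDemo {
    public static void main(String[] args) {
        int w = 3, h = 2;
        BufferedImage src = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        // Fill with distinct opaque colors so a row mix-up would be detectable
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                src.setRGB(x, y, 0xFF000000 | (y * w + x));

        // scansize = w: each row of w pixels follows the previous one directly
        int[] pixels = new int[w * h];
        src.getRGB(0, 0, w, h, pixels, 0, w);

        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        dst.setRGB(0, 0, w, h, pixels, 0, w);

        System.out.println(src.getRGB(2, 1) == dst.getRGB(2, 1)); // true
    }
}
```

With scansize = 1, as in the original code, consecutive rows overlap almost completely in the array, which scrambles the copied image.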
Is there a good way to combine ByteBuffer & FloatBuffer?
For example, I get byte[] data and I need to convert it to float[] data and vice versa:
byte[] to float[] (throws java.lang.UnsupportedOperationException):
byte[] bytes = new byte[N];
ByteBuffer.wrap(bytes).asFloatBuffer().array();
float[] to byte[] (works):
float[] floats = new float[N];
FloatBuffer floatBuffer = FloatBuffer.wrap(floats);
ByteBuffer byteBuffer = ByteBuffer.allocate(floatBuffer.capacity() * 4);
byteBuffer.asFloatBuffer().put(floats);
byte[] bytes = byteBuffer.array();
array() is an optional operation for ByteBuffer and FloatBuffer, to be supported only when the backing Buffer is actually implemented on top of an array with the appropriate type.
Instead, use get to read the contents of the buffer into an array, when you don't know how the buffer is actually implemented.
To add to Louis's answer, arrays in Java are limited in that they must be an independent region of memory. It is not possible to have an array that is a view of another array, whether to point at some offset in another array or to reinterpret the bytes of another array as a different type.
Buffers (ByteBuffer, FloatBuffer, etc.) were created to overcome this limitation. Although buffer accesses look like method calls, the JIT typically compiles them into machine code whose speed is comparable to plain array accesses.
For top performance and minimum memory usage, you should use ByteBuffer.wrap(bytes).asFloatBuffer() and then call get() and put().
To get a float array, you must allocate a new array and copy the data into it with ByteBuffer.wrap(bytes).asFloatBuffer().get(myFloatArray).
The array method of a ByteBuffer is not something that anyone should normally use. It will fail unless the Buffer is wrapping an array (instead of pointing to some memory-mapped region like a file or raw non-GC memory) and the array is of the same type as the buffer.
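Putting both answers together, here is a round trip in each direction that avoids relying on array() where it can fail (the float values are arbitrary):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class FloatByteRoundTrip {
    public static void main(String[] args) {
        float[] floats = {1.0f, -2.5f, 3.25f};

        // float[] -> byte[]: write through a FloatBuffer view of a ByteBuffer
        ByteBuffer bb = ByteBuffer.allocate(floats.length * Float.BYTES);
        bb.asFloatBuffer().put(floats);
        byte[] bytes = bb.array(); // safe here: allocate() guarantees an array-backed buffer

        // byte[] -> float[]: use get() on the FloatBuffer view, never array()
        float[] back = new float[floats.length];
        ByteBuffer.wrap(bytes).asFloatBuffer().get(back);

        System.out.println(Arrays.equals(floats, back)); // true
    }
}
```

The asFloatBuffer() view in the failing snippet is backed by a byte[], not a float[], which is exactly why its array() throws UnsupportedOperationException.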