I'd like to resize an image using OpenGL. I'd prefer it in Java, but C++ is OK, too.
I am really new to the whole thing, so here's the process in words as I see it:
load the image as a texture into OGL
set some stuff regarding state & filtering
draw the texture at a different size onto another texture
get the texture data into an array or something
Do you think it would be faster to use OpenGL and the GPU than a CPU-based blit library?
Thanks!
Instead of rendering a quad into the destination FBO, you can simply use the hardware blit functionality: glBlitFramebuffer. Its arguments are straightforward, but it requires careful preparation of your source and destination FBOs:
ensure both FBOs are complete (glCheckFramebufferStatus)
bind the read FBO target and the draw FBO target independently (glBindFramebuffer)
set the draw and read buffers (glDrawBuffer / glReadBuffer)
call glBlitFramebuffer, passing GL_LINEAR as the filter argument
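Putting those steps together, here is a minimal sketch, assuming LWJGL bindings and two FBOs that already have color attachments; srcFbo, dstFbo and the sizes are placeholders you would supply:

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL30.*;

public final class BlitResize {
    // Scales the source FBO's color attachment onto the destination FBO.
    public static void blit(int srcFbo, int srcW, int srcH,
                            int dstFbo, int dstW, int dstH) {
        // Bind source and destination independently
        glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFbo);

        // Verify that both FBOs are complete
        if (glCheckFramebufferStatus(GL_READ_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE
                || glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            throw new IllegalStateException("FBO incomplete");
        }

        // Select the color attachments to read from and write to
        glReadBuffer(GL_COLOR_ATTACHMENT0);
        glDrawBuffer(GL_COLOR_ATTACHMENT0);

        // Stretch the whole source rectangle onto the destination, linearly filtered
        glBlitFramebuffer(0, 0, srcW, srcH, 0, 0, dstW, dstH,
                          GL_COLOR_BUFFER_BIT, GL_LINEAR);
    }
}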
I bet it will be much faster on the GPU, especially for large images.
It depends. If the images are big, it might be faster to use OpenGL. But if you only do the resize and no further processing on the GPU side, it's not worth it, as it is very likely going to be slower than the CPU.
But if you need to resize the image and you can implement further processing in OpenGL, then it is a worthwhile idea.
I am about to create a little 2D desktop game with libGdx and I want it to have that retro pixelated look you know from games like "Flappy Bird". To achieve that effect, I thought of the following:
Create a game window (e.g. 640x480)
Create a framebuffer half that size (i.e. 320x240)
Render everything to the framebuffer
Get the texture from the framebuffer
Draw the texture to the screen with SpriteBatch, scaling it up 2x and using TextureFilter.Nearest.
I know I could scale each sprite individually with SpriteBatch.draw(), but I thought rendering everything at its original resolution and just scaling up the final composition might be easier.
So would the above technique be an appropriate way of getting that pixelated look?
What you have in mind sounds like a perfectly fine approach. The downside is that it does involve an additional data copy, but on the other hand your original rendering is for only 1/4 of the pixels, which saves you quite a bit of rendering overhead.
In plain OpenGL, you could use glBlitFramebuffer() for step 5. This requires OpenGL 3.0 or higher. It's essentially the same operation as drawing a textured quad, but it's a single call, and the underlying implementation could potentially be more efficient.
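If you stay within libGDX, the round trip might look roughly like the sketch below; FrameBuffer, SpriteBatch and the filter constants are real libGDX API, while batch and drawWorld() stand in for your own setup and scene rendering:

// in create() - build the half-resolution offscreen target once
FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, 320, 240, false);

// in render() - draw the scene small, then blow it up with nearest filtering
fbo.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
drawWorld(batch);                       // placeholder for your normal scene rendering
batch.end();
fbo.end();

Texture tex = fbo.getColorBufferTexture();
tex.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);

batch.begin();
// FBO color textures come out y-flipped in libGDX, hence flipY = true
batch.draw(tex, 0, 0, 640, 480, 0, 0, tex.getWidth(), tex.getHeight(), false, true);
batch.end();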
I have been thinking about the best way to draw a picture in OpenGL / JOGL.
I am currently programming a game, and my goal is to save the information about a picture in a text file instead of saving the picture itself.
My idea was to write a method that saves every pixel's information (RGB) at its X and Y position.
Then I draw every pixel, and it is finished.
What do you think of that idea?
You should simply use TextureIO to make a texture from your picture and draw that texture on four vertices with texture coordinates. glReadPixels() is very slow: reading each pixel of a picture would take a lot of time, saving its content as a text file would require a lot of memory (saving it as a compressed image in a lossless format like PNG might be worth a try), and drawing each pixel one by one would be much slower than drawing a texture. derhass is right. You could vectorize your picture (make an SVG from it), but you would then have to rasterize it, or implement some rendering of vectorized content yourself, and that would probably be slower than using a texture. I'm not sure you really need an offscreen buffer.
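As a rough sketch of the TextureIO route, assuming JOGL's GL2 profile and a file name picture.png (both placeholders):

import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL2;
import com.jogamp.opengl.GLAutoDrawable;
import com.jogamp.opengl.GLEventListener;
import com.jogamp.opengl.util.texture.Texture;
import com.jogamp.opengl.util.texture.TextureIO;
import java.io.File;

public class PictureRenderer implements GLEventListener {
    private Texture texture;

    @Override
    public void init(GLAutoDrawable drawable) {
        try {
            // Decode the image file and upload it as a texture in one call
            texture = TextureIO.newTexture(new File("picture.png"), false);
        } catch (Exception e) {
            throw new RuntimeException("could not load picture", e);
        }
    }

    @Override
    public void display(GLAutoDrawable drawable) {
        GL2 gl = drawable.getGL().getGL2();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        texture.enable(gl);
        texture.bind(gl);
        // One textured quad replaces drawing the picture pixel by pixel
        gl.glBegin(GL2.GL_QUADS);
        gl.glTexCoord2f(0f, 0f); gl.glVertex2f(-1f, -1f);
        gl.glTexCoord2f(1f, 0f); gl.glVertex2f( 1f, -1f);
        gl.glTexCoord2f(1f, 1f); gl.glVertex2f( 1f,  1f);
        gl.glTexCoord2f(0f, 1f); gl.glVertex2f(-1f,  1f);
        gl.glEnd();
    }

    @Override
    public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) {}

    @Override
    public void dispose(GLAutoDrawable drawable) {
        if (texture != null) texture.destroy(drawable.getGL());
    }
}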
I had a similar problem when I began working on my first-person shooter. I wasn't using JOGL at the very beginning; I reused someone else's source code that relied on software rendering into an image, and it was very slow. Then I used JOGL to draw each pixel one by one instead of using Java2D; it was about 4 times faster on my machine, but still very slow for me. In the end, I had to redesign the whole renderer to use OpenGL for what it is for, as derhass would say: I used triangles, quads and textures. The performance became acceptable. This is what you should do: use OpenGL to draw primitives, and clarify what you're trying to achieve so that we can help you a bit better.
Can graphics rendered using OpenGL work with graphics rendered not using OpenGL?
I am starting to learn OpenGL, but I am still shy when it comes to actually coding everything in OpenGL; I feel more comfortable drawing with JPanel or Canvas. I assume it wouldn't cause many issues code-wise, but could displaying it all at the same time cause problems? Or am I stuck with one or the other?
Integrating OpenGL graphics with another non-OpenGL image or rendering boils down to compositing images. You can take a 2D image and load it as a texture in OpenGL, so that you can then use that texture to paint a surface in OpenGL, or, as your question suggests, paint a background. Alternatively, you can use framebuffers in OpenGL to render an OpenGL scene to a texture, which can then be converted to a 2D bitmap and combined with another image.
There are limitations to this approach of course. Once an OpenGL scene has been moved to a 2D image, generally you lose all depth (it's possible to preserve depth in an additional channel in the image if you want to do that, but it would involve additional work).
In addition, since presumably you want one image to not simply overwrite the other, you're going to have to include an alpha (transparency) channel in one of your images, so that when you combine them, areas which haven't been drawn will end up showing the underlying image.
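As a rough illustration of the read-back route, here is a hedged sketch that pulls the current OpenGL framebuffer into a BufferedImage with glReadPixels (LWJGL bindings assumed) and alpha-composites it over a Java2D-rendered background:

import org.lwjgl.BufferUtils;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.nio.ByteBuffer;
import static org.lwjgl.opengl.GL11.*;

final class GlCompositor {
    // Reads the current GL framebuffer and draws it over the background image.
    static BufferedImage compositeOver(BufferedImage background, int w, int h) {
        ByteBuffer pixels = BufferUtils.createByteBuffer(w * h * 4);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        BufferedImage glImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int i = ((h - 1 - y) * w + x) * 4; // flip: GL rows start at the bottom
                int r = pixels.get(i) & 0xFF;
                int g = pixels.get(i + 1) & 0xFF;
                int b = pixels.get(i + 2) & 0xFF;
                int a = pixels.get(i + 3) & 0xFF;
                glImage.setRGB(x, y, (a << 24) | (r << 16) | (g << 8) | b);
            }
        }

        Graphics2D g2 = background.createGraphics();
        g2.drawImage(glImage, 0, 0, null); // the alpha channel decides what shows through
        g2.dispose();
        return background;
    }
}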
However, I would suggest you make the effort to find one rendering API that serves all your needs. The extra work you do to combine rendering output from two APIs is probably going to be wasted effort in the long run. It's one thing to embed an OpenGL control into an enclosing application that renders many of its controls using a more conventional API like AWT. On the other hand, it's highly unusual to try to composite output from both OpenGL and another rendering API into the same output area.
Perhaps if you could provide a more concrete example of what kinds of rendering you're talking about, people could offer more helpful advice.
You're stuck with one or the other. You can't put them together.
I'm creating a Java implementation of http://alteredqualia.com/visualization/evolve/ as a hobby project. I'm using hardware-accelerated Graphics2D to draw the polygons on a VolatileImage; I then want to create a texture from the VolatileImage so I can use glReadPixels to compare the generated image to the original (which is also a texture).
I spent the last 2 hours sifting through various texture documentation, but there doesn't seem to be an easy way to create a texture out of a VolatileImage. Did I miss something here, or is this just not possible? I know you can convert the VolatileImage to a BufferedImage and then create the texture, but this method is very slow, which is a bad thing considering performance is crucial for this program.
There is no direct way because a VolatileImage has no API for obtaining the image data, except by making a copy using snapshot().
In practice, simply use a BufferedImage from the start - there is some magic under the hood of BufferedImage that will make use of hardware acceleration where possible. One thing you must avoid is obtaining a reference to a BufferedImage's DataBuffer, as that may break the acceleration.
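If you go the BufferedImage route and use JOGL, uploading it is essentially a one-liner via AWTTextureIO; a minimal sketch, assuming a current GL context:

import com.jogamp.opengl.GLProfile;
import com.jogamp.opengl.util.texture.Texture;
import com.jogamp.opengl.util.texture.awt.AWTTextureIO;
import java.awt.image.BufferedImage;

final class TextureUpload {
    // Uploads the whole image in one call; no per-pixel copying in user code.
    static Texture upload(BufferedImage image) {
        return AWTTextureIO.newTexture(GLProfile.getDefault(), image, false);
    }
}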
I was wondering if anyone could advise on a good pattern for loading textures in an Android Java & OpenGL ES app.
My first concern is determining how many texture names to allocate and how I can efficiently go about doing this prior to rendering my vertices.
My second concern is loading the textures: I have to infer which texture to load based on my game data. This means I'll be playing around with strings, which I understand is something I really shouldn't be doing in my GL thread.
Overall I understand what's happening when loading textures, I just want to get the best lifecycle out of it. Are there any other things I should be considering?
1) You should allocate as many texture names as you need: one for each texture you are using.
Loading a texture is a very heavy operation that stalls the rendering pipeline, so you should never load textures inside your game loop. You should have a loading state before the main application state; the loading state is responsible for loading all the textures needed for rendering. That way, when you need to render your geometry, all the textures are already loaded and you don't have to worry about them anymore.
Note that once you don't need the textures anymore, you have to delete them using glDeleteTextures.
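A minimal sketch of such a loading-state helper on Android, assuming a GLES 2.0 context and a drawable resource id you supply:

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLES20;
import android.opengl.GLUtils;

final class TextureLoader {
    // Call this from the loading state, on the GL thread.
    static int loadTexture(Resources res, int resId) {
        int[] names = new int[1];
        GLES20.glGenTextures(1, names, 0);            // allocate one texture name

        Bitmap bitmap = BitmapFactory.decodeResource(res, resId);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, names[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0); // upload the pixels
        bitmap.recycle();                              // CPU-side copy no longer needed

        // Later, when the texture is no longer needed:
        // GLES20.glDeleteTextures(1, new int[] { name }, 0);
        return names[0];
    }
}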
2) If by "infer" you mean that you need different textures for different levels or something similar, then you should process the level data in the loading state and decide there which textures need to be loaded.
On the other hand, if you need to paint text (like the current score), then things get more complicated in OpenGL. You have the following options: prerender the needed text to textures (easy), implement your own bitmap font engine (harder), or use a Bitmap and Canvas pair to generate textures on the fly (slow).
If you have a limited set of messages to be shown during the game, I would most probably prerender them to textures, as the implementation is pretty trivial.
For the current score it is enough to have a texture with a glyph for each of the digits 0 to 9 and to use that to render arbitrary values. The implementation is quite straightforward; a sketch follows below.
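For example, with a hypothetical strip texture where the ten digits sit side by side, the texture coordinates for one digit's quad could be computed like this:

final class DigitAtlas {
    // Returns u/v pairs for a digit quad in triangle-strip order,
    // assuming each glyph occupies exactly 1/10 of the strip's width.
    static float[] texCoords(int digit) {
        float u0 = digit / 10f;        // left edge of the glyph
        float u1 = (digit + 1) / 10f;  // right edge of the glyph
        return new float[] {
            u0, 1f,   // bottom-left
            u1, 1f,   // bottom-right
            u0, 0f,   // top-left
            u1, 0f    // top-right
        };
    }
}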
If you need longer localized texts, then you need to start thinking about generating textures on the fly. Basically, you create a Bitmap, render text into it using a Canvas, upload it as a texture and render it like any other texture. Once you no longer need it, you delete it. This option is slow and should be avoided inside the application loop.
3) Concerning textures, to get the best out of the GPU you should keep at least the following things in mind (these are a bit more advanced; bother with them only after you get the application up and running, and only if you need to optimize the frame rate):
Minimize texture changes, as switching textures is a slow operation. Optimally, you should render all the objects using the same texture in one batch, then change the texture and render the objects needing that one, and so on.
Use texture atlases to minimize the number of textures (and texture changes)
If you have lots of textures, you may need to use bit depths other than 8888 to make all your textures fit into memory. Using lower bit depths may also improve performance.
This should be a comment to Lauri's answer, but I can't comment with 1 rep, and there's a thing that should be pointed out:
You should re-load textures every time your EGL context is lost (i.e. when your application is put into the background and brought back again). So, the correct location to (re)load them is in the method
public void onSurfaceChanged(GL10 gl, int width, int height)
of the renderer. Obviously, if you have different texture sets to be loaded based, e.g., on the game level you're playing, then when you change level you should delete the textures you're no longer going to use and load the new ones. Also, you have to keep track of what you have to re-load when the EGL context is lost. A sketch of the reload hook follows below.
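A minimal sketch of that hook, assuming a GLES 2.0 context and the hypothetical loadTexture() helper from the earlier answer (spriteTexture and R.drawable.sprite are placeholders):

// inside your GLSurfaceView.Renderer implementation:
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    // Texture names from the lost EGL context are invalid here; upload them again.
    spriteTexture = TextureLoader.loadTexture(getResources(), R.drawable.sprite);
}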