I saw this Google IO session: http://code.google.com/intl/iw/events/io/2009/sessions/WritingRealTimeGamesAndroid.html
He says that the draw_texture extension is the fastest method and that VBOs are the second fastest.
But I don't understand how to use either of them (the draw_texture method or the VBO approach).
Any suggestions?
The source code for the sprite method test mentioned in the video is available here:
http://code.google.com/p/apps-for-android/source/browse/#svn/trunk/SpriteMethodTest
Here is an example from that where a VBO is used:
http://code.google.com/p/apps-for-android/source/browse/SpriteMethodTest/src/com/android/spritemethodtest/Grid.java#237
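In case it helps, here is a minimal, stripped-down sketch of the same VBO idea for OpenGL ES 1.1 on Android (the class and the quad layout are my own illustration, not copied from Grid.java):

```java
// Minimal VBO sketch for OpenGL ES 1.1 on Android (GL11 interface).
// Assumes a GL11 instance `gl` and a quad made of 4 vertices / 6 indices.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;
import javax.microedition.khronos.opengles.GL11;

public class VboQuad {
    private int vertexBufferId;
    private int indexBufferId;

    public void upload(GL11 gl) {
        float[] vertices = {          // x, y, z per vertex
            0f, 0f, 0f,   1f, 0f, 0f,
            0f, 1f, 0f,   1f, 1f, 0f
        };
        short[] indices = { 0, 1, 2, 2, 1, 3 };

        FloatBuffer vb = ByteBuffer.allocateDirect(vertices.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        vb.put(vertices).position(0);
        ShortBuffer ib = ByteBuffer.allocateDirect(indices.length * 2)
                .order(ByteOrder.nativeOrder()).asShortBuffer();
        ib.put(indices).position(0);

        int[] ids = new int[2];
        gl.glGenBuffers(2, ids, 0);
        vertexBufferId = ids[0];
        indexBufferId = ids[1];

        // Upload once; afterwards the vertex data lives in GPU-accessible memory.
        gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, vertexBufferId);
        gl.glBufferData(GL11.GL_ARRAY_BUFFER, vertices.length * 4, vb, GL11.GL_STATIC_DRAW);
        gl.glBindBuffer(GL11.GL_ELEMENT_ARRAY_BUFFER, indexBufferId);
        gl.glBufferData(GL11.GL_ELEMENT_ARRAY_BUFFER, indices.length * 2, ib, GL11.GL_STATIC_DRAW);
    }

    public void draw(GL11 gl) {
        gl.glEnableClientState(GL11.GL_VERTEX_ARRAY);
        gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, vertexBufferId);
        gl.glVertexPointer(3, GL11.GL_FLOAT, 0, 0);   // offset into the bound VBO
        gl.glBindBuffer(GL11.GL_ELEMENT_ARRAY_BUFFER, indexBufferId);
        gl.glDrawElements(GL11.GL_TRIANGLES, 6, GL11.GL_UNSIGNED_SHORT, 0);
    }
}
```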
Here is an example from that where the draw texture extension is used:
http://code.google.com/p/apps-for-android/source/browse/SpriteMethodTest/src/com/android/spritemethodtest/GLSprite.java
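As a rough idea of how the extension is called (a minimal sketch, assuming an Android OpenGL ES 1.x context whose GL object implements GL11Ext and a texture that is already loaded and bound):

```java
// Minimal draw_texture sketch for OpenGL ES 1.x on Android.
// Assumes the device exposes GL_OES_draw_texture and that the sprite's
// texture is already bound to GL_TEXTURE_2D.
import javax.microedition.khronos.opengles.GL11;
import javax.microedition.khronos.opengles.GL11Ext;

public class DrawTextureSprite {
    public void draw(GL11 gl, int texWidth, int texHeight, float screenX, float screenY) {
        // The crop rectangle (in texel coordinates) selects the source region:
        // {left, bottom, width, height}; a negative height flips the image vertically.
        int[] crop = { 0, texHeight, texWidth, -texHeight };
        gl.glTexParameteriv(GL11.GL_TEXTURE_2D, GL11Ext.GL_TEXTURE_CROP_RECT_OES, crop, 0);

        // Draw the cropped region as a screen-aligned rectangle at (screenX, screenY).
        // Note: this is one call per sprite; there is no way to batch it.
        ((GL11Ext) gl).glDrawTexfOES(screenX, screenY, 0f, texWidth, texHeight);
    }
}
```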
One thing to watch out for, however, is that the draw texture extension is actually not the fastest for all games using sprites. Many games have groups of sprites that all share the same render state, for example. In that case it is much faster to put all sprites with the same render state in the same buffer and draw them with the same draw command. The draw texture command doesn't allow this. With the draw texture command you have to call it once per sprite.
This is the reason that atlas textures are often used. An atlas texture is a single bound texture object that has many different images in it. You can draw sprites with different images that way without having to bind to a different texture object. All you do is have them use different texture coordinates into the atlas texture. If the other render state is the same as well, such as the blending function needed, then you can draw the sprites together for better performance.
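For example, a small helper like this (the names are illustrative, not from SpriteMethodTest) turns a sub-image's pixel rectangle inside the atlas into the texture coordinates you write into a sprite's UV buffer:

```java
// Converts a sub-image's pixel rectangle inside an atlas into normalized UVs.
// The same bound atlas texture is used for every sprite; only these
// coordinates change per sprite.
public final class AtlasRegion {
    public final float u0, v0, u1, v1;

    public AtlasRegion(int x, int y, int width, int height,
                       int atlasWidth, int atlasHeight) {
        u0 = (float) x / atlasWidth;
        v0 = (float) y / atlasHeight;
        u1 = (float) (x + width) / atlasWidth;
        v1 = (float) (y + height) / atlasHeight;
    }

    // UVs for the quad's four corners, in the same order as its vertices.
    public float[] quadTexCoords() {
        return new float[] {
            u0, v1,   u1, v1,   u0, v0,   u1, v0
        };
    }
}
```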
Here are some great Android OpenGL ES working examples.
http://code.google.com/p/android-gl/
I'm programming an Android app where a grid is drawn, which you can move around in and move along. The grid consists of about 2000 to 5000 quads, each with a different texture. I defined 4 vertices and use an index buffer to draw each quad. Before drawing, I position it using a model matrix. As you can move in my scene I use view frustum culling, which increases the performance in some situations. Unfortunately there might be a case where I need to draw all of the quads, so I want to ask how I can prevent slow drawing.
I can't use a texture atlas as all of the textures are pretty big (from 256x256 to 1024x1024). I think calling glDrawElements() for each quad is what makes me slow, but I don't know how I can change it.
Another idea I had would be to draw the scene to a texture and just bind this texture to a single quad to create an illusion of the scene being drawn. As the user gets closer I could redraw it for better resolution. Could this work?
I look forward to any kind of help.
I can't use a texture atlas as all of the textures are pretty big (from 256x256 to 1024x1024).
You can fit 64 256x256 textures into a 2048x2048 atlas; that's a huge amount, so you should definitely atlas. Even getting four 1024x1024 textures onto a 2048x2048 atlas is worth doing; it can quarter your draw call count.
And as WLGfx says in the comments to your question, you should batch up any quads that use the same texture (with atlasing there will be a lot more of these).
I think this would be enough, but you might still have a pretty high draw call count in your fully zoomed-out view. After implementing atlasing and batching, if performance is still a problem there, you could create a separate asset set of thumbnail textures at, say, quarter resolution (so a 256x256 becomes 64x64). This thumbnail asset set would fit onto just a handful of 2048x2048 atlas sheets, and you could switch to it when zoomed out far enough.
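To make the batching idea concrete, here is a rough sketch (it assumes OpenGL ES 2.0 and an already-linked shader program; the names are hypothetical and texture coordinates are omitted for brevity): pre-transform each quad's corners by its model matrix on the CPU, append every quad that shares one atlas page into a single buffer, and issue one glDrawElements per page.

```java
// Sketch: batch all quads that share one atlas page into a single draw call.
import android.opengl.GLES20;
import android.opengl.Matrix;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

public class QuadBatch {
    public void draw(int program, float[][] modelMatrices, float[][] localCorners) {
        int quadCount = modelMatrices.length;
        FloatBuffer positions = ByteBuffer.allocateDirect(quadCount * 4 * 3 * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        ShortBuffer indices = ByteBuffer.allocateDirect(quadCount * 6 * 2)
                .order(ByteOrder.nativeOrder()).asShortBuffer();

        float[] in = new float[4];
        float[] out = new float[4];
        for (int q = 0; q < quadCount; q++) {
            // Transform the quad's 4 local corners by its model matrix on the CPU,
            // so the whole batch can share one identity model transform on the GPU.
            for (int c = 0; c < 4; c++) {
                in[0] = localCorners[q][c * 3];
                in[1] = localCorners[q][c * 3 + 1];
                in[2] = localCorners[q][c * 3 + 2];
                in[3] = 1f;
                Matrix.multiplyMV(out, 0, modelMatrices[q], 0, in, 0);
                positions.put(out[0]).put(out[1]).put(out[2]);
            }
            short base = (short) (q * 4);
            indices.put(base).put((short) (base + 1)).put((short) (base + 2))
                   .put((short) (base + 2)).put((short) (base + 1)).put((short) (base + 3));
        }
        positions.position(0);
        indices.position(0);

        GLES20.glUseProgram(program);
        int aPosition = GLES20.glGetAttribLocation(program, "aPosition");
        GLES20.glEnableVertexAttribArray(aPosition);
        GLES20.glVertexAttribPointer(aPosition, 3, GLES20.GL_FLOAT, false, 0, positions);

        // One draw call for every quad that uses this atlas page.
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, quadCount * 6,
                GLES20.GL_UNSIGNED_SHORT, indices);
    }
}
```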
Another idea I had would be to draw the scene to a texture and just bind this texture to a single quad to create an illusion of the scene being drawn. As the user gets closer I could redraw it for better resolution. Could this work?
This could work as long as your scene is very static; if the quads are moving or changing every frame, it might not help. Also, there might be a noticeable framerate hitch when you have to do the full redraw.
I am about to create a little 2D desktop game with libGdx and I want it to have that retro pixelated look you know from games like "Flappy Bird". To achieve that effect, I thought of the following:
Create a game window (e.g. 640x480)
Create a framebuffer half that size (i.e. 320x240)
Render everything to the framebuffer
Get the texture from the framebuffer
Draw the texture to the screen with SpriteBatch, scaling it up 2x and using TextureFilter.Nearest.
I know I could scale each sprite individually with SpriteBatch.draw(), but I thought rendering everything at its original resolution and just scaling up the final composition might be easier.
So would the above technique be an appropriate way of getting that pixelated look?
What you have in mind sounds like a perfectly fine approach. The downside is that it does involve an additional data copy, but on the other hand your original rendering is for only 1/4 of the pixels, which saves you quite a bit of rendering overhead.
In plain OpenGL, you could use glBlitFramebuffer() for step 5. This requires OpenGL 3.0 or higher. It's essentially the same operation as drawing a textured quad, but it's a single call, and the underlying implementation could potentially be more efficient.
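In libGDX itself, a minimal sketch of the approach could look like this (assuming a 640x480 window; the FrameBuffer's color texture comes out upside down, hence the flip):

```java
// Sketch: render the scene at 320x240 into a FrameBuffer, then draw its
// color texture scaled up 2x with nearest-neighbour filtering.
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;

public class PixelatedRenderer {
    private final FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGB888, 320, 240, false);
    private final SpriteBatch batch = new SpriteBatch();

    public void render() {
        // 1. Render the whole scene at the low resolution.
        fbo.begin();
        // ... draw your game here at 320x240 ...
        fbo.end();

        // 2. Draw the result scaled up to the window size, without smoothing.
        Texture sceneTexture = fbo.getColorBufferTexture();
        sceneTexture.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);
        TextureRegion region = new TextureRegion(sceneTexture);
        region.flip(false, true); // FBO textures are stored upside down

        batch.begin();
        batch.draw(region, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        batch.end();
    }
}
```

Whether the flip is needed depends on how your camera is set up; if the output appears upside down, toggle it.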
I thought about the best way to draw a picture in OpenGL / JOGL.
I'm currently programming a game, and my goal is to save the information about a picture in a text file instead of saving the picture itself.
My idea was to write a method that saves every pixel's information (RGB) at its X and Y position.
Then I would draw every pixel and be finished.
What do you think about that idea?
You should simply use TextureIO to make a texture from your picture and draw that texture on four vertices with texture coordinates.
glReadPixels() is very slow; reading each pixel of a picture would take a lot of time, saving its content as a text file would require a lot of memory (saving it as a compressed image in a lossless format like PNG might be worth a try), and drawing each pixel one by one would be a lot slower than drawing a texture. derhass is right.
You could vectorize your picture (make an SVG from it), but you would have to rasterize it afterwards, or implement some rendering of vectorized content yourself, and it would probably be slower than using a texture. I'm not sure you really need an offscreen buffer.
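For illustration, a minimal JOGL 2 sketch of that approach (fixed-function pipeline; the file name is made up for the example):

```java
// Sketch: load a picture with TextureIO and draw it on a single textured quad.
// Assumes a JOGL 2 GL2 (fixed-function) context inside a GLEventListener.
import java.io.File;
import java.io.IOException;
import com.jogamp.opengl.GL2;
import com.jogamp.opengl.util.texture.Texture;
import com.jogamp.opengl.util.texture.TextureIO;

public class PictureRenderer {
    private Texture texture;

    public void init(GL2 gl) throws IOException {
        // TextureIO handles decoding and uploading the image for you.
        texture = TextureIO.newTexture(new File("picture.png"), true);
    }

    public void display(GL2 gl) {
        texture.enable(gl);
        texture.bind(gl);
        gl.glBegin(GL2.GL_QUADS);
        gl.glTexCoord2f(0f, 1f); gl.glVertex2f(-1f, -1f);
        gl.glTexCoord2f(1f, 1f); gl.glVertex2f( 1f, -1f);
        gl.glTexCoord2f(1f, 0f); gl.glVertex2f( 1f,  1f);
        gl.glTexCoord2f(0f, 0f); gl.glVertex2f(-1f,  1f);
        gl.glEnd();
        texture.disable(gl);
    }
}
```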
I had a similar problem when I began working on my first-person shooter. I wasn't using JOGL at the very beginning; I reused someone else's source code, which relied on software rendering into an image, and it was very slow. Then I used JOGL to draw each pixel one by one instead of using Java2D; it was about 4 times faster on my machine but still far too slow for me. In the end I had to redesign the whole renderer to use OpenGL for what it is for, as derhass would say: I used triangles, quads and textures. The performance became acceptable, and that is what you should do. Use OpenGL to draw primitives, and clarify what you're trying to achieve so that we can help you a bit better.
I'm currently rendering a colored cube with VBOs. I want to put a texture on it, but I have no idea how. I'm still learning OpenGL, so how can I render the cube with a texture on it? Do I need another VBO for the texture?
Thanks!
Yeah, you need another FloatBuffer, not to mention a texture.
When mapping a texture onto an object, you need to use texture coordinates (How do opengl texture coordinates work?). Use glTexCoordPointer() to map your texture onto the cube.
You can load textures with the slick library that comes with LWJGL.
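As a rough sketch of the texture-coordinate part (assuming LWJGL 2 with the fixed-function pipeline and Slick-Util for loading; only one face's UVs are shown, and your existing position/color VBO setup is assumed):

```java
// Sketch: add a texture-coordinate VBO next to an existing position/color VBO
// and draw with the fixed-function pipeline (LWJGL 2 + Slick-Util).
import java.io.IOException;
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL15;
import org.newdawn.slick.opengl.Texture;
import org.newdawn.slick.opengl.TextureLoader;
import org.newdawn.slick.util.ResourceLoader;

public class TexturedCube {
    private int texCoordVboId;
    private Texture texture;

    public void init() throws IOException {
        texture = TextureLoader.getTexture("PNG",
                ResourceLoader.getResourceAsStream("res/cube.png"));

        // One (u, v) pair per vertex, in the same order as the position VBO.
        // Only one face shown here; a full cube needs UVs for all 24 vertices.
        float[] uv = { 0f, 0f,  1f, 0f,  1f, 1f,  0f, 1f };
        FloatBuffer buffer = BufferUtils.createFloatBuffer(uv.length);
        buffer.put(uv).flip();

        texCoordVboId = GL15.glGenBuffers();
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, texCoordVboId);
        GL15.glBufferData(GL15.GL_ARRAY_BUFFER, buffer, GL15.GL_STATIC_DRAW);
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
    }

    public void draw(int vertexCount) {
        GL11.glEnable(GL11.GL_TEXTURE_2D);
        texture.bind();

        GL11.glEnableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, texCoordVboId);
        GL11.glTexCoordPointer(2, GL11.GL_FLOAT, 0, 0L); // offset into the bound VBO

        // ... bind your position/color VBOs and set their pointers as before ...
        GL11.glDrawArrays(GL11.GL_QUADS, 0, vertexCount);

        GL11.glDisableClientState(GL11.GL_TEXTURE_COORD_ARRAY);
    }
}
```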
I've been trying to port my previous game from C# to Java. I'm wondering how I can create graphics layers that I can draw tiles on.
Besides the depth buffer, the color buffer and the stencil buffer, you can use a Frame Buffer Object (FBO): http://www.songho.ca/opengl/gl_fbo.html
It can be used as a drawing destination. For example, to make a mirror you first render the mirror's point of view into a temporary texture and then render the mirror using that texture. In the same way you can create a texture for each layer, draw exactly on the layer you need, and at the end render all the layers at different heights (or whatever else you want to do with them).
Or, as Tim commented, when you want to draw something on layer 'n' you can simply render it at height z=n. That way you won't have a separate image per layer, only all of them combined, so if you need the layers for post-processing (special effects on individual layers) or for saving them as images, you should use an FBO. In some cases, though, you can just apply different shaders when drawing on different layers.
An FBO is harder to use but a much more powerful tool.
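For reference, a minimal sketch of creating one such layer target with an FBO (plain OpenGL 3.0 calls through LWJGL here, since the question is about Java; error handling and cleanup omitted):

```java
// Sketch: create an FBO with a color texture attachment to use as one "layer".
// Render into it with glBindFramebuffer(..., fboId), then bind `textureId`
// like any other texture when compositing the layers.
import java.nio.ByteBuffer;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL30;

public class LayerTarget {
    public final int fboId;
    public final int textureId;

    public LayerTarget(int width, int height) {
        // Color texture that will receive the layer's rendering.
        textureId = GL11.glGenTextures();
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
        GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
        GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
        GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA8, width, height, 0,
                GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);

        // Framebuffer object with that texture as its color attachment.
        fboId = GL30.glGenFramebuffers();
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fboId);
        GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0,
                GL11.GL_TEXTURE_2D, textureId, 0);

        if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE) {
            throw new IllegalStateException("FBO is not complete");
        }
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0); // back to the default framebuffer
    }
}
```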
What works best for me (in 2D games):
Z-buffer: first set up the Z-buffer, then give each draw call a Z value, and that's it (but it fails with semi-transparent objects; a minimal sketch follows below)
Knowing the draw order: draw the lowest layer first and the top layer last (slower than the Z-buffer)
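And a tiny sketch of the Z-buffer variant (LWJGL-style immediate mode, just for illustration):

```java
// Sketch: use the depth buffer so layer order no longer depends on draw order.
// Each tile is drawn at a z derived from its layer index.
import org.lwjgl.opengl.GL11;

public class LayeredDrawing {
    public void renderFrame() {
        GL11.glEnable(GL11.GL_DEPTH_TEST);
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

        drawTile(10f, 10f, /* layer */ 0);
        drawTile(12f, 10f, /* layer */ 1); // ends up in front of layer 0
    }

    private void drawTile(float x, float y, int layer) {
        // With a glOrtho(..., -1, 1) projection a larger z is closer to the viewer;
        // keep the values inside the near/far range.
        float z = layer * 0.1f;
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glVertex3f(x,      y,      z);
        GL11.glVertex3f(x + 1f, y,      z);
        GL11.glVertex3f(x + 1f, y + 1f, z);
        GL11.glVertex3f(x,      y + 1f, z);
        GL11.glEnd();
    }
}
```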