I was wondering if anyone could advise on a good pattern for loading textures in an Android Java & OpenGL ES app.
My first concern is determining how many texture names to allocate and how I can efficiently go about doing this prior to rendering my vertices.
My second concern is loading the textures: I have to infer which texture to load based on my game data. This means I'll be playing around with strings, which I understand is something I really shouldn't be doing on my GL thread.
Overall I understand what's happening when loading textures, I just want to get the best lifecycle out of it. Are there any other things I should be considering?
1) You should allocate as many texture names as you need: one for each texture you are using.
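For example, on Android with OpenGL ES 1.x this can be done up front with a single call (a minimal sketch; the count and array name are illustrative, and gl is the GL10 instance handed to the Renderer callbacks):

    // Generate one texture name per texture, before any rendering happens.
    final int NUM_TEXTURES = 8;            // illustrative count
    int[] textureNames = new int[NUM_TEXTURES];
    gl.glGenTextures(NUM_TEXTURES, textureNames, 0);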
Loading a texture is a very heavy operation that stalls the rendering pipeline, so you should never load textures inside your game loop. You should have a loading state before the actual game state; the loading state is responsible for loading all the textures the rendering will need. That way, when you need to render your geometry, all the textures are already loaded and you don't have to worry about them anymore.
Note that once you don't need the textures anymore, you have to delete them using glDeleteTextures.
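A minimal sketch of one such load step on Android (the resource id, context, and texture name are illustrative; GLUtils does the upload):

    // In the loading state, on the GL thread: decode a bitmap and upload it.
    Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.player);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureNames[0]);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();   // the pixel data now lives on the GPU

    // Later, when the texture is no longer needed:
    gl.glDeleteTextures(1, textureNames, 0);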
2) If by infer you mean that you need different textures for different levels or something similar, then you should process the level data in the loading state and decide there which textures need to be loaded.
On the other hand, if you need to paint text (like the current score), then things get more complicated in OpenGL. You have the following options: prerender the needed text to textures (easy), implement your own bitmap font engine (harder), or use a Bitmap and Canvas pair to generate textures on the fly (slow).
If you have a limited set of messages to be shown during the game, then I would most probably prerender them to textures, as the implementation is pretty trivial.
For the current score it is enough to have a texture with a glyph for each digit from 0 to 9 and to use those to render arbitrary values. The implementation is quite straightforward.
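For instance, with the ten digits packed side by side in one strip texture, the texture coordinates for each digit are simple arithmetic (a sketch; score is whatever value you are displaying):

    // Digits 0-9 laid out horizontally: each glyph occupies 1/10 of the width.
    float glyphWidth = 1.0f / 10.0f;
    int digit = score % 10;              // least significant digit first
    float u0 = digit * glyphWidth;       // left texture coordinate of the glyph
    float u1 = u0 + glyphWidth;          // right texture coordinate
    // Render a quad with texture coordinates (u0, 0)..(u1, 1),
    // then divide score by 10 and repeat for the next digit.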
If you need longer localized texts, then you need to start thinking about generating textures on the fly. Basically you create a Bitmap and render text into it using a Canvas. Then you upload it as a texture and render it like any other texture. Once you don't need it anymore, you delete it. This option is slow and should be avoided inside the application loop.
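A rough sketch of that Bitmap/Canvas route on Android (the sizes, paint settings, and the localizedMessage / textTextureName names are placeholders):

    // Render a string into a Bitmap, then upload the Bitmap as a texture.
    Bitmap bitmap = Bitmap.createBitmap(256, 64, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setTextSize(32);
    paint.setColor(Color.WHITE);
    canvas.drawText(localizedMessage, 0, 40, paint);

    gl.glBindTexture(GL10.GL_TEXTURE_2D, textTextureName);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();
    // Render it as any other texture; call glDeleteTextures when done with it.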
3) Concerning textures, to get the best out of the GPU you should keep at least the following things in mind (these points are a bit more advanced, and you should bother with them only after you have the application up and running and need to optimize the frame rate):
- Minimize texture changes, as it is a slow operation. Optimally you should render all the objects using the same texture in a batch, then change the texture and render the objects needing that one, and so on.
- Use texture atlases to minimize the number of textures (and texture changes).
- If you have lots of textures, you might need to use bit depths other than 8888 to make all your textures fit into memory. Using lower bit depths may also improve performance.
This should be a comment to Lauri's answer, but I can't comment with 1 rep, and there's a thing that should be pointed out:
You should re-load textures every time your EGL context is lost (i.e. when your application is put into the background and brought back again). So, the correct place to (re)load them is in the method
public void onSurfaceChanged(GL10 gl, int width, int height)
of the renderer. Obviously, if you have different texture sets to be loaded based on, say, the game level you're playing, then when you change level you should delete the textures you're no longer going to use and load the new ones. Also, you have to keep track of what has to be re-loaded when the EGL context is lost.
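In renderer terms, that bookkeeping might look roughly like this (a sketch; loadTextures is a hypothetical helper standing in for whatever loading code you already have):

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
        // The EGL context may have been lost while in the background, which
        // invalidates all previously created texture names: reload everything.
        loadTextures(gl);   // hypothetical helper that re-runs the loading state
    }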
I'm having quite a bit of difficulty wrapping my head around the actual display side of things with libgdx. That is, it just seems fairly jumbled in terms of what needs to be done in order to actually put something up onto the screen. I guess my confusion can sort of be separated into two parts:
1) What exactly needs to be done in terms of creating an image? There's Texture, TextureRegion, TextureAtlas, Sprite, Batch, and probably a few other art-related assets that I'm missing. How do these all relate and tie into each other? What's the "production chain" among these, I guess, would be a way of putting it.
2) In terms of putting whatever is created from the stuff above onto the monitor or display, how do the different coordinate and sizing measures relate and translate to and from each other? Say there's some image X that I want to put on the screen. It's got its own set of dimensions and coordinates, but then there's also a viewport size (is there a viewport position?) and a camera position (is there a camera size?). On top of all that, there's also the overall display size from Gdx.graphics. A few examples of things I might want to do could be as follows:
- X is my "global map" that is bigger than my screen size. I want to be able to scroll/pan across it. What are the coordinates/positions I should use when displaying it?
- Y is bigger than my screen size. I want to scale it down and have it always be in the center of the screen/display. What scaling factor do I use here, and which coordinates/positions?
- Z is smaller than my screen size. I want to stick it in the upper left corner of my screen and have it "stick" to the global map I mentioned earlier. Which positioning system do I use?
Sorry if that was a bunch of stuff... I guess the tl;dr of that second part is just which set of positions/coordinates, sizes, and scales am I supposed to do everything in terms of?
I know this might be a lot to ask at once, and I also know that most of this stuff can be found online, but after sifting through tutorial after tutorial, I can't seem to get a straight answer as to how these things all relate to each other. Any help would be appreciated.
Texture is essentially the raw image data.
TextureRegion allows you to grab smaller areas from a larger texture. For example, it is common practice to pack all of the images for your game/app into a single large texture (the LibGDX “TexturePacker” is a separate program that does this) and then use regions of the larger texture for your individual graphics. This is done because switching textures is a heavy and slow operation and you want to minimize this process.
When you pack your images into a single large image with the TexturePacker it creates a “.atlas” file which stores the names and locations of your individual images. TextureAtlas allows you to load the .atlas file and then extract your original images to use in your program.
Sprite adds position and color capabilities to the texture. Notice that the Texture API has no methods for setting/getting position or color. Sprites will be your characters and other objects that you can actually move around and position on the screen.
Batch/SpriteBatch is an efficient way of drawing multiple sprites to the screen. Instead of issuing a separate drawing call for each sprite, the Batch groups many sprites together into far fewer drawing calls.
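Put together, the production chain described above might look something like this (a sketch; the file and region names are placeholders, and in a real app you would create the atlas and batch once, e.g. in create(), rather than per frame):

    // Atlas (from TexturePacker) -> region -> sprite -> batch.
    TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));
    TextureRegion region = atlas.findRegion("player");
    Sprite player = new Sprite(region);
    player.setPosition(100, 100);

    SpriteBatch batch = new SpriteBatch();
    batch.begin();
    player.draw(batch);    // draws the sprite at its own position
    batch.end();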
And hopefully I’m not adding to the confusion, but another option I really like is using the “Actor” and “Stage” classes over the “Sprite” and “SpriteBatch” classes. Actor is similar to Sprite but adds additional functionality for moving/animating, via the act method. The Stage replaces the SpriteBatch, as it uses its own internal SpriteBatch so you do not need to use a SpriteBatch explicitly.
There is also an entire set of UI components (table, button, textfield, slider, progress bar, etc) which are all based off of Actor and work with the Stage.
I can’t really help with question 2. I stick to UI-based apps, so I don’t know the best practices for working with large game worlds. But hopefully someone more knowledgeable in that area can help you with that.
This was too long to reply as a comment, so I’m responding as another answer...
I think both Sprite/SpriteBatch and Actor/Stage are equally powerful, as you can still animate and move with Sprite/SpriteBatch, but Actor/Stage is easier to work with. The stage has two methods called “act” and “draw” which allow the stage to update and draw every actor it contains very easily. You override the act method for each of your actors to specify what kind of action you want it to do. Look up a few different tutorials for Stage/Actor with sample code and it should become clear how to use it.
Also, I was slightly incorrect before in saying that “Actor” is equivalent to Sprite, because Sprite includes a texture, but Actor by itself does not have any kind of graphical component. There is an extension of Actor called “Image” that includes a Drawable, so the Image class is actually the equivalent of Sprite. Actor is the base class that provides the methods for acting (or “updating”), but it doesn’t have to be graphical. I've used Actors for other purposes, such as triggering audio sounds at specific times.
Atlas creates the large Texture containing all of your PNG files and then allows you to get regions from it for the individual PNGs. So the pipeline for getting a specific PNG graphic would be Atlas > Region > Sprite/Image. Both the Image and Sprite classes have constructors that take a region.
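As a rough sketch of that pipeline with the scene2d classes (file and region names are placeholders):

    // Atlas > Region > Image, then let the Stage update and draw it.
    TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));
    Image playerImage = new Image(atlas.findRegion("player"));
    playerImage.setPosition(100, 100);

    Stage stage = new Stage();
    stage.addActor(playerImage);

    // In render():
    stage.act(Gdx.graphics.getDeltaTime());   // updates every actor
    stage.draw();                             // uses the Stage's internal batch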
I have been thinking about the best way to draw a picture in OpenGL / JOGL.
I am currently programming a game, and my goal is to save the information about a picture in a text file instead of saving the picture itself.
My idea was to program a method that stores every pixel's information (RGB) at its X and Y position.
Then I would draw every pixel, and it would be finished.
What do you think about that idea?
You should simply use TextureIO to make a texture from your picture and draw that texture on 4 vertices that have texture coordinates. glReadPixels() is very slow: reading each pixel of a picture would take a lot of time, saving its content as a text file would require a lot of memory (saving it as a compressed image in a lossless format like PNG might be worth a try), and drawing each pixel one by one would be a lot slower than drawing a texture. derhass is right. You could vectorize your picture (make an SVG from it), but you would have to rasterize it afterwards, or you would have to implement some rendering of vectorized content yourself, and that would probably be slower than using a texture. I'm not sure you really need an offscreen buffer.
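Along those lines, a minimal fixed-function sketch with JOGL (the file name and coordinates are placeholders; TextureIO.newTexture needs a current GL context and throws IOException):

    // In init(GLAutoDrawable d): load the picture once, not every frame.
    Texture tex = TextureIO.newTexture(new File("picture.png"), false);

    // In display(GLAutoDrawable d): draw one textured quad.
    GL2 gl = d.getGL().getGL2();
    tex.enable(gl);
    tex.bind(gl);
    gl.glBegin(GL2.GL_QUADS);
    gl.glTexCoord2f(0, 0); gl.glVertex2f(-1, -1);
    gl.glTexCoord2f(1, 0); gl.glVertex2f( 1, -1);
    gl.glTexCoord2f(1, 1); gl.glVertex2f( 1,  1);
    gl.glTexCoord2f(0, 1); gl.glVertex2f(-1,  1);
    gl.glEnd();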
I had a similar problem when I began working on my first person shooter. I wasn't using JOGL at the very beginning; I reused someone else's source code that relied on software rendering into an image, and it was very slow. Then I used JOGL to draw each pixel one by one instead of using Java2D; it was about 4 times faster on my machine but still very slow for me. In the end I had to redesign the whole rendering to use OpenGL for what it is for, as derhass would say: I used triangles, quads and textures. The performance became acceptable, and this is what you should do: use OpenGL to draw primitives, and clarify what you're trying to achieve so that we can help you a bit better.
Can graphics rendered using OpenGL work with graphics rendered not using OpenGL?
I am starting to learn OpenGL, but I am still shy when it comes to actually coding everything in OpenGL; I feel more comfortable drawing things out with JPanel or Canvas. I'm assuming that mixing them wouldn't cause much issue code-wise, but could displaying both at the same time cause issues? Or am I stuck with one or the other?
Integrating OpenGL graphics with another non-OpenGL image or rendering boils down to compositing images. You can take a 2D image and load it as a texture in OpenGL, such that you can then use that texture to paint a surface in OpenGL or, as your question suggests, paint a background. Alternatively, you can use framebuffers in OpenGL to render an OpenGL scene to a texture, which can then be converted to a 2D bitmap and combined with another image.
There are limitations to this approach of course. Once an OpenGL scene has been moved to a 2D image, generally you lose all depth (it's possible to preserve depth in an additional channel in the image if you want to do that, but it would involve additional work).
In addition, since presumably you want one image to not simply overwrite the other, you're going to have to include an alpha (transparency) channel in one of your images, so that when you combine them, areas which haven't been drawn will end up showing the underlying image.
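On the Java2D side, that combination is just a couple of drawImage calls (a sketch; background, glFrame, width, and height are assumed to exist, with glFrame being the OpenGL output read back into a BufferedImage with an alpha channel):

    // Composite the OpenGL-rendered frame over a conventionally rendered background.
    BufferedImage combined = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = combined.createGraphics();
    g.drawImage(background, 0, 0, null);   // plain 2D layer
    g.drawImage(glFrame, 0, 0, null);      // alpha-blended OpenGL layer on top
    g.dispose();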
However, I would suggest you undertake the effort to simply find one rendering API that serves all your needs. The extra work you do to combine rendering output from two APIs is probably going to be wasted effort in the long run. It's one thing to embed an OpenGL control into an enclosing application that renders many of its controls using a more conventional API like AWT. On the other hand, it's highly unusual to try to composite output from both OpenGL and another rendering API into the same output area.
Perhaps if you could provide a more concrete example of what kinds of rendering you're talking about, people could offer more helpful advice.
You're stuck with one or the other. You can't put them together.
I have a list of game objects that I want to render in libgdx 2D.
In my setup I have two different cameras, one that matches the pixel resolution exactly (for drawing tiled background and fonts) and one that matches the gameboard with a typical viewport of (-0.1, -0.1, 1.2, 1.5).
Now I would like each game object in the list to decide for itself which camera to use when rendering, which means that the projection matrix for the SpriteBatch may change several times inside a SpriteBatch.begin()...end() block.
One other solution would be to have two SpriteBatches active at the same time, make the game objects select which SpriteBatch to render to, and then call end() on both after rendering is finished. Some other questions and posts I've found on the Internet suggest that multiple active SpriteBatches are not a good idea.
What are the pros and cons of these solutions? Is there a better one?
Design-wise, that's awful!
I usually have 2 cameras as well: one for the game objects, with low values (12x8 or 15x10), and one for the HUD (attack button, stats, etc.), with high values (pixel-perfect for my test device). I first draw the game objects, finish the batch, and then I draw the HUD. Just 2 draws. But:
SpriteBatch#setProjectionMatrix
"If this is called inside a Batch.begin()/Batch.end() block, the current batch is flushed to the gpu."
If you set the projection matrix several times inside your render code, that will trigger a lot of batch flushes and negatively (and greatly) affect your performance.
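So instead of letting each object switch cameras, sort the work into two passes per frame (a sketch; gameCamera, hudCamera, worldObjects, and hud are placeholders for your own objects):

    // World pass: everything that lives in game-board coordinates.
    batch.setProjectionMatrix(gameCamera.combined);
    batch.begin();
    for (GameObject o : worldObjects) {
        o.render(batch);
    }
    batch.end();   // one flush

    // HUD pass: everything that lives in pixel coordinates.
    batch.setProjectionMatrix(hudCamera.combined);
    batch.begin();
    hud.render(batch);
    batch.end();   // second flush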
I'd like to resize an image using OpenGL. I'd prefer it in Java, but C++ is OK, too.
I am really new to the whole thing so here's the process in words as I see it:
load the image as a texture into OGL
set some stuff, regarding state & filtering
draw the texture at a different size onto another texture
get the texture data into an array or something
Do you think if it would be faster to use OpenGL and the GPU than using a CPU-based BLIT library?
Thanks!
Instead of rendering a quad into the destination FBO, you can simply use the hardware blit functionality: glBlitFramebuffer. Its arguments are straightforward, but it requires careful preparation of your source and destination FBOs (a rough sketch follows the list):
ensure both FBOs are complete (glCheckFramebufferStatus)
set read FBO target and write FBO target independently (glBindFramebuffer)
set draw buffers and read buffers (glDraw/ReadBuffers)
call glBlitFramebuffer, setting GL_LINEAR filter in the argument
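In Java, using LWJGL-style bindings, the blit itself might look roughly like this (srcFbo/dstFbo and the sizes are assumed to be created, attached, and complete already):

    import static org.lwjgl.opengl.GL11.*;
    import static org.lwjgl.opengl.GL30.*;

    // Bind the source and destination FBOs to the read/draw targets separately.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glDrawBuffer(GL_COLOR_ATTACHMENT0);

    // Scale the whole source rectangle onto the destination rectangle.
    glBlitFramebuffer(0, 0, srcWidth, srcHeight,
                      0, 0, dstWidth, dstHeight,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);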
I bet it will be much faster on GPU, especially for large images.
It depends: if the images are big, it might be faster using OpenGL. But if it's just doing the resize and no further processing on the GPU side, then it's not worth it, as it is very likely going to be slower than the CPU.
But if you need to resize the image and you can implement further processing in OpenGL, then it is a worthwhile idea.