Should TextureRegions in an atlas be power of 2? - java

It is my understanding that image dimensions should be powers of two for GPU optimization. However, if I'm packing my textures into a sheet for libGDX, can the atlas be a power of two while the actual regions are NOT? Or must both the atlas sheet and each TextureRegion be powers of two?

My understanding is that they don't need to be. As far as I know, the reason POT (power-of-two) textures are required is to enable optimizations such as mipmapping and anisotropic filtering. When you pack many images into one texture, you upload that entire texture to the GPU, and the GPU performs the desired optimizations on the whole thing. The atlas is like an index for looking up certain sectors (or "pieces") of your texture (texture regions), so you can retrieve the desired regions easily. Those regions don't need to be POT, because the GPU already performs its optimizations on the entire set (the entire texture). I think you should ask this type of question in the game dev community to get a deeper answer.
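For example, in libGDX the lookup is just this (a minimal sketch; the file and region names are placeholders):

    // Load a packed atlas and pull out one region by name. The region can be
    // any size the packer gave it; only the backing sheet should be POT.
    TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));
    TextureRegion hero = atlas.findRegion("hero");
    batch.draw(hero, x, y);  // batch is an existing SpriteBatch, between begin()/end()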

Related

Changing 3d model texture based on resolution

As far as I know, you can get the screen size and density using Gdx.graphics.getDensity(), so you can load the right texture for e.g. 1x, 1.5x, etc.
But what about the texture that comes with the 3D model? For example, the texture is only intended for a maximum of 1280x800px, while my Android device has a 3x density.
I don't want to scale it too much, because that can make the image too blurry/faded/not sharp. Does anyone know a solution?
EDIT:
Let me explain in detail.
I have one ModelInstance with a texture atlas (2048x2048px) attached.
When the game is opened on a 4K screen, I scale the model up almost three times, causing the texture to become blurry. That makes sense, because the jump from 240dpi to 640dpi is very large.
So in my opinion the solution is to make several texture atlases, for 240dpi, 320dpi, 480dpi, etc. The problem is that I don't know how to replace the texture atlas that was integrated with the Model from the beginning, so that when scaling up, the atlas is automatically replaced with a higher-resolution one. Thanks.
Usually in 3D graphics the camera or the model is mobile. There isn't one fixed best resolution for a texture, because the camera may be very near, very far away, or viewing the textured surface at a glancing angle.
The solution offered by graphics APIs is texture filtering settings. Under magnification (where a texel takes up more than a screen pixel) you can use linear filtering for soft edges or point filtering for hard edges. Minification is more complex: you can have linear or point filtering, but you can also have mipmaps, which are a precalculated chain of successively half-sized versions of your image, typically all the way down to 1x1. You can set texture filtering to pick the nearest mipmap, blend between mipmaps, or use anisotropic filtering for better sharpness at glancing angles. Generally, linear filtering for magnification plus a full mipmap chain with anisotropic filtering for minification produces very good quality at good enough performance to be the default choice.
So, you won't be giving the GPU a single texture for your model, you'll be giving it a chain of textures, and letting the GPU worry about how to sample that chain to give the correct amount of blur/sharpness. For performance and compatibility with mipmaps, it is usually a good idea to use power-of-two textures (e.g. 1024x1024 rather than 1280x800).
So, just make a 1024x1024 or 2048x2048 texture with mipmaps and appropriate filtering settings, use it on every device regardless of resolution, and quality-wise you're sorted.
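In libGDX that setup might look like this (a sketch; "model.png" is a placeholder file):

    // Load a POT texture, let libGDX generate the mipmap chain (second
    // argument = true), and use trilinear minification with linear magnification.
    Texture texture = new Texture(Gdx.files.internal("model.png"), true);
    texture.setFilter(Texture.TextureFilter.MipMapLinearLinear,  // minification
                      Texture.TextureFilter.Linear);             // magnification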
If you're particularly worried about memory use or load times then there's an argument to reduce the texture size on lower resolution devices (basically, have a second asset with halved resolution for low-res devices, or just skip the highest resolution mip when loading for low-res devices), but I think that might be a premature optimization at this stage.

Drawing many textured quads in OpenGL ES 2.0

I'm programming an Android app where a grid is drawn, which you can move around and move through. The grid consists of about 2000 to 5000 quads, each with a different texture. I defined 4 vertices and use an index buffer to draw each quad. Before drawing I position it using a model matrix. As you can move in my scene, I use view frustum culling, which increases performance in some situations. Unfortunately there may be cases where I need to draw all of the quads, so I want to ask how to prevent slow drawing.
I can't use a texture atlas, as all of the textures are pretty big (from 256x256 to 1024x1024). I think calling glDrawElements() for each quad is what makes me slow, but I don't know how to change that.
Another idea I had would be to draw the scene to a texture and just bind this texture to a single quad to create an illusion of the scene being drawn. As the user gets closer I could redraw it for better resolution. Could this work?
I look forward to any kind of help.
I can't use a texture atlas as all of the textures are pretty big (from 256x256 to 1024x1024).
You can fit 64 256x256 textures into a 2048x2048 atlas; that's a huge number, so you should definitely atlas. Even getting four 1024x1024 textures onto a 2048x2048 sheet is worth doing; it can quarter your draw call count.
And as WLGfx says in the comments to your question, you should batch up any quads that use the same texture (with atlasing there will be a lot more of these).
I think this would be enough, but you might still have a pretty high draw call count in your fully zoomed-out view. After implementing atlasing and batching, if performance there is still a problem, you could create a separate asset set of thumbnail textures at, say, quarter resolution (so a 256x256 becomes 64x64). This thumbnail set would fit onto just a handful of 2048x2048 atlas sheets, and you could switch to it when zoomed out far enough.
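As a rough sketch of what atlasing changes: each quad's unit UVs get remapped into its cell of the atlas, so all quads sharing the sheet can go through one draw call (the tile and atlas sizes here are illustrative):

    // For a 2048x2048 atlas built from 256x256 tiles, tile (col, row) covers:
    float cell = 256f / 2048f;                // normalized size of one tile
    float u0 = col * cell, v0 = row * cell;   // top-left corner of the cell
    float u1 = u0 + cell, v1 = v0 + cell;     // bottom-right corner
    // Write (u0,v0)..(u1,v1) into the shared vertex buffer instead of 0..1,
    // so many quads can be batched into a single glDrawElements call.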
Another idea I had would be to draw the scene to a texture and just bind this texture to a single quad to create an illusion of the scene being drawn. As the user gets closer I could redraw it for better resolution. Could this work?
This could work, as long as your scene is mostly static; if the quads are moving or changing every frame, it might not help. Also, there might be a noticeable framerate hitch whenever you have to do the full redraw.

Ideal number of spritesheets for LWJGL/OpenGL

I am making a game using LWJGL, and so far I have decided to have 5 or 6 sprite sheets: one for the blocks, one for the items, one for objects, and so on. That way I have sprite sheets whose sprites are closely related and usually the same size (as with blocks and items). But is this the best way? Or should I just throw everything onto a single sprite sheet, with no organization whatsoever?
If I am doing it the right way, there is another problem. For example, when you are on the map, I need to draw the blocks. But over the blocks there can be electrical wires and other things that live in a separate sprite sheet. This information, however, is stored in the same array, so normally I iterate over it once and, on each iteration, draw the block and then the wire over it, switching sprite sheets twice per iteration. But I thought switching might take some time, so maybe it would be better to iterate twice: first draw all the blocks, then iterate again to draw the wires? To switch textures I am using the SlickUtil Texture class, which has a bind method that is really easy to use.
There is no "ideal"; there are simply the factors that matter for your needs.
Remember the reason why you use sprite sheets at all: because switching textures is too expensive to do per-object when dealing with 2D rendering. So as long as you're not switching textures for each sprite you render, you'll already be ahead of the game performance-wise.
The other considerations you need to take into account are:
Minimum user hardware specifications. Specifically, what is the smallest GL_MAX_TEXTURE_SIZE you want your code to work with? The larger your sprite sheets get, the steeper your hardware requirements, since a single sprite sheet must fit in a single texture.
This value is hardware-dependent, but there are some general requirements. OpenGL 3.3 requires 1024 at a minimum; pretty much every piece of GL 3.3 hardware gives 4096. OpenGL 4.3 requires a massive 16384, which is approaching the theoretical limits of floating-point texture coordinate capacity (assuming you want at least 8 bits of sub-pixel texture coordinate precision).
GL 2.1 has a minimum requirement of 64, but any actual 2.1 hardware people will have will offer between 512 and 2048. So pick your sprite sheet size based on this.
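Rather than guessing, you can query the actual limit at startup (an LWJGL sketch; the 2048 cap is an arbitrary example):

    // Ask the driver for the largest texture dimension it supports, then
    // clamp the sprite sheet size to that.
    int maxTextureSize = GL11.glGetInteger(GL11.GL_MAX_TEXTURE_SIZE);
    int sheetSize = Math.min(maxTextureSize, 2048);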
What you're rendering. You want to render as much as possible from one sprite sheet, because what you want to avoid is frequent texture switches. If your world is divided into layers and you can fit each layer's sprites onto its own sheet, you're doing fine. No hardware is going to choke on 20 texture changes; it's tens of thousands that are the problem.
The main thing is to render everything that uses a sheet all at once. Not necessarily in the same render call; you can switch meshes and shader uniforms/fixed-function state. But you shouldn't switch sheets between these renders until you've rendered everything needed for that sheet.
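A minimal sketch of that ordering, assuming sprites have been bucketed by sheet beforehand (spritesBySheet and the Sprite type are hypothetical; bind() is the SlickUtil method mentioned in the question):

    // One texture switch per sheet, then draw everything that uses it.
    for (Map.Entry<Texture, List<Sprite>> entry : spritesBySheet.entrySet()) {
        entry.getKey().bind();            // switch sheets once
        for (Sprite sprite : entry.getValue()) {
            sprite.draw();                // mesh/uniform changes are fine here
        }
    }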

Horrible performance loss when using OpenGL FBO

I have successfully implemented a simple 2D game using LWJGL (OpenGL) where objects fade away as they get further from the player. This fading was initially implemented by computing the distance from the player to each object's origin and using it to scale the object's alpha/opacity.
However, with larger objects this approach looks a bit too rough. My solution was to scale alpha/opacity per pixel of the object. Not only would this look better, it would also move the computation from the CPU to the GPU.
I figured I could implement it using an FBO and a temporary texture.
By drawing to the FBO and masking it with a precomputed distance map (a texture) using a special blend mode, I intended to achieve the effect.
The algorithm works as follows:
0) Initialize opengl and setup FBO
1) Render background to standard buffer
2) Switch to custom FBO and clear it
3) Render objects (to FBO)
4) Mask FBO using distance-texture
5) Switch to standard buffer
6) Render FBO temporary texture (to standard buffer)
7) Render hud elements
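In LWJGL terms, steps 2-6 look roughly like this (a sketch only; fboId, fboTexture, distanceTexture, and the two helper methods stand in for my actual code, using GL30-style FBO calls or their EXT equivalents on older hardware):

    GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fboId);       // step 2: switch to FBO
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);                   //         and clear it
    renderObjects();                                          // step 3
    GL11.glBlendFunc(GL11.GL_ZERO, GL11.GL_SRC_ALPHA);        // step 4: mask with
    drawFullscreenQuad(distanceTexture);                      //         distance map
    GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);           // step 5: standard buffer
    GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
    drawFullscreenQuad(fboTexture);                           // step 6: composite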
A bit of extra info:
The temporary texture has the same size as the window (and thus standard buffer)
Step 4 uses a special blend mode to achieve the desired effect:
GL11.glBlendFunc( GL11.GL_ZERO, GL11.GL_SRC_ALPHA );
My temporary texture is created with min/mag filters: GL11.GL_NEAREST
The data is allocated using: org.lwjgl.BufferUtils.createByteBuffer(4 * width * height);
The texture is initialized using:
GL11.glTexImage2D( GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, width, height, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, dataBuffer);
There are no GL errors in my code.
This does indeed achieve the desired results.
However, when I did a bit of performance testing, I found that my FBO approach cripples performance. I tested by requesting 1000 successive renders and measuring the time. The results were as follows:
In 512x512 resolution:
Normal: ~1.7s
FBO: ~2.5s
(FBO without step 6: ~1.7s)
(FBO without step 4: ~1.7s)
In 1680x1050 resolution:
Normal: ~1.7s
FBO: ~7s
(FBO without step 6: ~3.5s)
(FBO without step 4: ~6.0s)
As you can see, this scales really badly. To make it even worse, I'm intending to do a second pass of this type. The machine I tested on is supposed to be high end in terms of my target audience, so I can expect people to have far below 60 fps with this approach, which is hardly acceptable for a game this simple.
What can I do to salvage my performance?
As suggested by Damon and sidewinderguy, I successfully implemented a similar solution using a fragment shader (and vertex shader). My performance is a little better than my initial CPU-run, object-based computation, which is MUCH faster than my FBO approach. At the same time it gives visual results much closer to the FBO approach (overlapping objects behave a bit differently).
For anyone interested: the fragment shader basically transforms gl_FragCoord.xy and does a texture lookup. I am not sure this gives the best performance, but with only one other texture active I do not expect performance to improve by omitting the lookup and computing the fade value directly. Also, I no longer have a performance bottleneck, so further optimizations should wait until they are found to be required.
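A minimal sketch of such a shader, kept as a Java string for use with LWJGL's shader API (the uniform names and texture unit assignments are illustrative, not my exact code):

    // Fades each fragment by a precomputed distance map. Assumes the sprite
    // texture is on unit 0 and the window-sized distance map on unit 1.
    private static final String FADE_FRAGMENT_SHADER =
        "uniform sampler2D u_sprite;\n" +
        "uniform sampler2D u_distanceMap;\n" +
        "uniform vec2 u_windowSize;\n" +
        "varying vec2 v_texCoord;\n" +
        "void main() {\n" +
        "    vec4 color = texture2D(u_sprite, v_texCoord);\n" +
        "    vec2 mapCoord = gl_FragCoord.xy / u_windowSize; // window -> 0..1\n" +
        "    float fade = texture2D(u_distanceMap, mapCoord).a;\n" +
        "    gl_FragColor = vec4(color.rgb, color.a * fade);\n" +
        "}\n";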
Also, I am very grateful for all the help, suggestions and comments I received :-)

Loading textures in an Android OpenGL ES App

I was wondering if anyone could advise on a good pattern for loading textures in an Android Java & OpenGL ES app.
My first concern is determining how many texture names to allocate and how I can efficiently go about doing this prior to rendering my vertices.
My second concern is loading the textures: I have to infer which texture to load from my game data. This means I'll be playing around with strings, which I understand is something I really shouldn't be doing in my GL thread.
Overall I understand what's happening when loading textures, I just want to get the best lifecycle out of it. Are there any other things I should be considering?
1) You should allocate as many texture names as you need: one for each texture you are using.
Loading a texture is a very heavy operation that stalls the rendering pipeline, so you should never load textures inside your game loop. Instead, have a loading state before the application state that does the rendering; the loading state is responsible for loading all the textures the rendering needs. That way, when you need to render your geometry, all the textures are already loaded and you don't have to worry about them anymore.
Note that once you no longer need the textures, you have to delete them using glDeleteTextures.
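A minimal sketch of both ends of that lifecycle with the GLES20 bindings (textureCount is whatever your loading state decides it needs):

    // Allocate all texture names up front, during the loading state.
    int[] textureNames = new int[textureCount];
    GLES20.glGenTextures(textureCount, textureNames, 0);
    // ... bind each name and upload its image data here ...
    // And when the textures are no longer needed:
    GLES20.glDeleteTextures(textureCount, textureNames, 0);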
2) If by infer you mean that you need different textures for different levels or something similar, then you should process the level data in the loading state and decide there which textures to load.
On the other hand, if you need to draw text (like the current score), things get more complicated in OpenGL. You have the following options: prerender the needed text to textures (easy), implement your own bitmap font engine (harder), or use a Bitmap and Canvas pair to generate textures on the fly (slow).
If you have a limited set of messages to show during the game, I would most probably prerender them to textures, as the implementation is pretty trivial.
For the current score it is enough to have a texture containing glyphs for the digits 0 to 9 and to use those to render arbitrary values. The implementation is quite straightforward.
If you need longer localized texts, then you need to start thinking about generating textures on the fly. Basically you create a Bitmap, render text into it using a Canvas, upload it as a texture, and render it like any other texture. Once you don't need it anymore, you delete it. This option is slow and should be avoided inside the application loop.
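A sketch of that Bitmap/Canvas route (the sizes, text, and textureName are placeholders):

    // Render a string into a Bitmap, upload it as a texture, free the Bitmap.
    Bitmap bitmap = Bitmap.createBitmap(256, 64, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setTextSize(32);
    paint.setColor(Color.WHITE);
    canvas.drawText("Level complete!", 8, 40, paint);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureName);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);  // upload
    bitmap.recycle();                                        // drop the CPU-side copy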
3) Concerning textures and to get the best out of the GPU you should keep at least the following things in your mind (these things will get a bit more advanced, and you should bother with them only after you get the application up and running and if you need to optimize the frame rate):
Minimize texture changes, as they are slow operations. Optimally, render all the objects that use the same texture in one batch, then change textures and render the objects needing the next one, and so on.
Use texture atlases to minimize the number of textures (and texture changes)
If you have lots of textures, you might need bit depths other than 8888 to make them all fit into memory. Using lower bit depths may also improve performance.
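For example, decoding at 16 bits per pixel halves the memory of an opaque texture (R.drawable.background is a placeholder resource, and getResources() assumes a Context is available):

    // Ask BitmapFactory for RGB_565 (5-6-5 bits, no alpha) instead of 8888.
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inPreferredConfig = Bitmap.Config.RGB_565;
    Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.background, options);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);  // uploads as 565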
This should be a comment on Lauri's answer, but I can't comment with 1 rep, and there's a thing that should be pointed out:
You should reload textures every time the EGL context is lost (i.e. when your application is put into the background and brought back again). So the correct place to (re)load them is in the method
public void onSurfaceChanged(GL10 gl, int width, int height)
of the renderer. Obviously, if you have different texture sets to load depending, for example, on the game level you're playing, then when you change level you should delete the textures you're no longer going to use and load the new ones. Also, you have to keep track of what needs reloading when the EGL context is lost.
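A sketch of how that might look in the renderer (TextureStore is a hypothetical helper that remembers which assets the current level needs):

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
        // The EGL context may have been recreated; re-upload everything.
        textureStore.reloadAll(gl);
    }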
