I'm programming an Android app that draws a grid which you can move around and move through. The grid consists of about 2000 to 5000 quads, each with a different texture. I defined 4 vertices and use an index buffer to draw each quad. Before drawing I position it using a model matrix. Since you can move around in my scene I use view frustum culling, which improves performance in some situations. Unfortunately there can still be cases where I need to draw all of the quads, so I want to ask how to keep drawing fast in that situation.
I can't use a texture atlas as all of the textures are pretty big (from 256x256 to 1024x1024). I suspect that calling glDrawElements() once per quad is what makes me slow, but I don't know how to change that.
Another idea I had would be to draw the scene to a texture and just bind this texture to a single quad to create an illusion of the scene being drawn. As the user gets closer I could redraw it for better resolution. Could this work?
I look forward to any kind of help.
I can't use a texture atlas as all of the textures are pretty big (from 256x256 to 1024x1024).
You can fit 64 256x256 textures into a 2048x2048 atlas; that's a huge amount, so you should definitely atlas. Even getting four 1024x1024 textures onto a 2048x2048 sheet is worth doing: it can cut your draw call count to a quarter.
And as WLGfx says in the comments to your question, you should batch up any quads that use the same texture (with atlasing there will be a lot more of these).
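To give an idea of what that batching could look like, here is a rough GLES20 sketch, assuming the usual java.nio and android.opengl.GLES20 imports. The quadsByTexture map, the Quad class, the 5-float vertex layout (x, y, z, u, v) and the attribute locations aPosition/aTexCoord are made-up names for illustration, not code from your project:

final int maxQuadsPerBatch = 1024; // enough for the largest texture group
FloatBuffer verts = ByteBuffer.allocateDirect(maxQuadsPerBatch * 4 * 5 * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
ShortBuffer indices = ByteBuffer.allocateDirect(maxQuadsPerBatch * 6 * 2)
        .order(ByteOrder.nativeOrder()).asShortBuffer();

for (Map.Entry<Integer, List<Quad>> group : quadsByTexture.entrySet()) {
    verts.clear();
    indices.clear();
    short base = 0;
    for (Quad q : group.getValue()) {
        q.writeTransformedVertices(verts); // apply the model matrix on the CPU instead of per draw call
        indices.put(base).put((short) (base + 1)).put((short) (base + 2))
               .put((short) (base + 2)).put((short) (base + 3)).put(base);
        base += 4;
    }
    verts.flip();
    indices.flip();
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, group.getKey());
    verts.position(0);
    GLES20.glVertexAttribPointer(aPosition, 3, GLES20.GL_FLOAT, false, 5 * 4, verts);
    verts.position(3);
    GLES20.glVertexAttribPointer(aTexCoord, 2, GLES20.GL_FLOAT, false, 5 * 4, verts);
    // one glDrawElements() per texture group instead of one per quad
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.remaining(),
            GLES20.GL_UNSIGNED_SHORT, indices);
}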
I think this would be enough, but you might still have a pretty high draw call count in your fully zoomed-out view. After implementing atlasing and batching, if performance there is still a problem, you could create a separate asset set of thumbnail textures at, say, quarter resolution (so a 256x256 becomes 64x64). This thumbnail asset set would fit onto just a handful of 2048x2048 atlas sheets, and you could switch to it when zoomed out far enough.
Another idea I had would be to draw the scene to a texture and just bind this texture to a single quad to create an illusion of the scene being drawn. As the user gets closer I could redraw it for better resolution. Could this work?
This could work as long as your scene is very static; if the quads are moving or changing every frame, it might not help. Also, there might be a noticeable framerate hitch whenever you have to do the full redraw.
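If you do try the render-to-texture idea, the GLES20 setup looks roughly like this (a sketch; the cached resolution and when you trigger the redraw are up to you):

int sceneW = 1024, sceneH = 1024; // resolution of the cached scene; tune to taste
int[] fboId = new int[1];
int[] texId = new int[1];
GLES20.glGenFramebuffers(1, fboId, 0);
GLES20.glGenTextures(1, texId, 0);

GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, sceneW, sceneH, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, texId[0], 0);
// ... render all quads into the FBO once, then on later frames:
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
// ... and draw a single quad textured with texId[0]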
Related
If I draw a texture using SpriteBatch that is not visible to the camera or viewport, does it still render and use my GPU?
a little something like:
batch.draw(img, 9999999f, 9999999f, 1f, 1f)
or do I have to check if it's out of frame and not draw it in the first place?
It does draw it. So the vertex shader will run for the four vertices of the texture or texture region. Since the vertex shader projects it to somewhere outside the visible space, the fragment shader will not be run.
For each sprite drawn, the vertex shader program runs four times regardless of whether the sprite is visible, and the fragment shader program runs roughly once for every pixel of the sprite that appears on screen. A modern low-end phone can easily handle hundreds of “wasted” sprites being drawn off-screen.
It’s up to you to decide whether it is worth calculating if it will be visible and skipping the draw. If you do this for each individual sprite, it will be costly on the CPU for a comparatively small saving of GPU work. If you think about it, checking an individual sprite based on whether any of its four corners is visible is running your own copy of the vertex shader program on the CPU redundantly, just in case it saves you from having to repeat that program on the GPU; and the GPU is far more optimized for this kind of thing. (If you're using orthographic projection, the CPU's version can at least be simpler than the GPU version, because it becomes a simple 2D comparison.)
For that reason, if you’re going to check them first, you would check groups of them at a time. So you might organize your game world into sections and check the outer bounds of a whole section before deciding to draw all or none of that section's sprites. For it to be worthwhile, each section should be big enough to encompass at least a couple hundred sprites.
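With an orthographic camera that check can be a plain rectangle overlap. A minimal libGDX sketch, where Section, its bounds and its sprite list are hypothetical names (camera is an OrthographicCamera, batch a SpriteBatch):

Rectangle view = new Rectangle(
        camera.position.x - camera.viewportWidth * camera.zoom / 2f,
        camera.position.y - camera.viewportHeight * camera.zoom / 2f,
        camera.viewportWidth * camera.zoom,
        camera.viewportHeight * camera.zoom);

for (Section section : sections) {
    if (view.overlaps(section.bounds)) { // one cheap CPU test covers hundreds of sprites
        for (Sprite sprite : section.sprites) {
            sprite.draw(batch);
        }
    }
}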
As far as I know, you can get the screen density using Gdx.graphics.getDensity(), so you can load the right texture for, e.g., 1x, 1.5x, etc.
But what about the texture that comes with the 3D model? For example, the texture is only intended for a maximum of 1280x800 px, while my Android device has 3x dpi.
I don't want to scale it up too much because that can make the image blurry and not sharp. Does anyone know the solution?
EDIT:
Let me explain in detail.
I have one ModelInstance with a texture atlas (2048x2048 px) attached.
When the game is opened on a 4K screen, I scale the model up almost three times, which causes the texture to become blurry. That makes sense, because the jump from 240 dpi to 640 dpi is very large.
So in my opinion the solution is to make several texture atlases for 240 dpi, 320 dpi, 480 dpi, etc. The problem is that I don't know how to replace the texture atlas that is integrated with the Model from the beginning, so that when scaling up, the atlas is automatically replaced with a higher-resolution one. Thanks.
Usually in 3D graphics the camera or model is mobile. There isn't a fixed best resolution for a texture, because the camera may be very near, very far away, or viewing the textured surface at a glancing angle.
The solution offered by graphics APIs are settings for texture filtering. Under magnification (where a texel takes up more than a screen pixel) you can do linear filtering for soft edges or point filtering for hard edges. Minification is more complex, you can have linear or point filtering, but you can also have mipmaps, which are a precalculated chain of successively half-sized versions of your image, typically all the way down to 1x1. You can set texture filtering to pick the nearest mipmap, blend between mipmaps, or use anisotropic filtering for better sharpness at glancing angles. Generally linear filtering for magnification, and full mipmap chains with anisotropic filtering for minification produces very good quality and good enough performance to be a good default choice.
So, you won't be giving the GPU a single texture for your model, you'll be giving it a chain of textures, and letting the GPU worry about how to sample that chain to give the correct amount of blur/sharpness. For performance and compatibility with mipmaps, it is usually a good idea to use power-of-two textures (e.g. 1024x1024 rather than 1280x800).
So, just make a 1024x1024 or 2048x2048 texture with mipmaps and appropriate filtering settings, use it on every device regardless of its resolution, and quality-wise you're sorted.
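In libGDX, for example, mipmap generation is one constructor flag away. A sketch (hedged: the anisotropic call needs a reasonably recent libGDX version and device support):

Texture texture = new Texture(Gdx.files.internal("atlas.png"), true); // true = build the mipmap chain
texture.setFilter(Texture.TextureFilter.MipMapLinearLinear, // minification: trilinear across mipmaps
        Texture.TextureFilter.Linear);                      // magnification: linear
texture.setAnisotropicFilter(16f); // sharper glancing angles, clamped to what the GPU supports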
If you're particularly worried about memory use or load times then there's an argument to reduce the texture size on lower resolution devices (basically, have a second asset with halved resolution for low-res devices, or just skip the highest resolution mip when loading for low-res devices), but I think that might be a premature optimization at this stage.
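If you do go down that route, swapping the atlas on an already-loaded model is a matter of replacing the material's texture attribute. A sketch, assuming a single diffuse texture on the first material and a hypothetical half-resolution asset:

Texture lowResAtlas = new Texture(Gdx.files.internal("atlas_half.png"), true); // hypothetical half-res asset
modelInstance.materials.first().set(TextureAttribute.createDiffuse(lowResAtlas));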
However, I have a weird issue: when drawing, it seems the outer 1 px of an image is stretched to fit the rectangle, but the inside is only stretched to an extent. I was drawing 48x48 tiles, but drew a 500x500 tile to show the issue. [The 500x500 tile draws fine.]
The worst part is that it seems to choose when to stretch and when not to, and also what to stretch. I'm sorry, this is hard to explain, but I have attached an image that I hope does a better job.
It could also just be me misunderstanding how to draw with SpriteBatch.
Edit: the tile is 48x48, not 64x64; I've just been working all day.
This is because you are not rendering "pixel perfect", which means your image does not line up with the pixel grid of your monitor. A quick fix might be to set a linear filter for your textures, since by default they use nearest filtering, and thus a pixel on the screen inherits the closest color it can get. A linear filter will interpolate colors and make that line "look" thinner.
texture.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
If you are using TexturePacker you can do this in one go by altering its settings.
texturePackerSetting.filterMin = Texture.TextureFilter.Linear;
texturePackerSetting.filterMag = Texture.TextureFilter.Linear;
Or you could edit the atlas file itself by changing the filter parameter to:
filter: Linear,Linear
This obviously costs more computation, since more work is needed for each pixel you draw to the screen, but I would not worry about it until your drawing starts to become a bottleneck.
Another solution is to draw pixel perfect, which means you set your viewport to the size of the device (Gdx.graphics.getWidth(), Gdx.graphics.getHeight()), in other words a ScreenViewport, and draw your textures at the exact sizes you want them. Of course this means a screen with more pixels sees more of your game world than a screen with fewer pixels, and the more pixels a device has, the smaller your textures will look. Another drawback is that you have to forget about any zooming, or draw sprites for each level of zoom so that they line up with the pixel grid of the device again.
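A minimal pixel-perfect setup could look like this (a sketch; batch and the 48x48 tileRegion are assumed from your description):

ScreenViewport viewport = new ScreenViewport(); // one world unit == one screen pixel
viewport.update(Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);

batch.setProjectionMatrix(viewport.getCamera().combined);
batch.begin();
batch.draw(tileRegion, 0, 0, 48, 48); // drawn at its exact pixel size, so it lines up with the pixel grid
batch.end();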
I've made an Asteroids game and am having trouble with different screen sizes. Yes, I know about viewports, but in my case I don't think they can work. I've tried to use one, but my objects are not images; instead they're rendered by a ShapeRenderer, and therefore viewports probably don't affect them, I guess. I've solved how to fit the text for most types of smartphone screens, but not the rendered objects like the player and asteroids. On some screens they're the perfect size, and on other screens either too small or too big.
Is there a way to scale rendered (ShapeRenderer) types in libGDX so that they will fit the screen size / resolution? Or must I try to implement a class for common resolutions that can be used?
Here are two pictures that illustrate the problem. The text in the images is already fixed, by the way.
EDIT:
It does work to create and apply the viewport for the ShapeRenderer now, but I'm still having the same problem: the game objects inside the viewport aren't scaled to fit the current screen. I need a way to scale down the game objects using a viewport (if possible). The way the game is implemented, the size of the smartphone screen is the game world size, so there is little space to move the ship if the screen is pretty small. I want the objects to move around in an area that fits the viewport size; the game should not get more difficult or easier depending on the screen size.
Example on left: The game on a large smartphone screen
Example on right: The ShapeRenderer viewport is working
Viewports were created to reduce the effort needed to render on different-sized screens, so they are ideal for situations like this.
In order to use a Viewport with a ShapeRenderer you simply need to set the ShapeRenderer's projection matrix to the one from the viewport's camera.
shapeRenderer.setProjectionMatrix(viewport.getCamera().combined);
This means the shapeRenderer will render the way the viewport is configured.
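Putting it together inside an ApplicationAdapter, a sketch with a FitViewport (the 800x480 world size is an arbitrary assumption) so every device sees the same world:

FitViewport viewport = new FitViewport(800, 480); // fixed world size, letterboxed to fit the screen

@Override
public void resize(int width, int height) {
    viewport.update(width, height, true); // re-fit whenever the screen size changes
}

@Override
public void render() {
    viewport.apply();
    shapeRenderer.setProjectionMatrix(viewport.getCamera().combined);
    shapeRenderer.begin(ShapeRenderer.ShapeType.Filled);
    shapeRenderer.circle(400, 240, 20); // world units: same relative size on every screen
    shapeRenderer.end();
}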
I am about to create a little 2D desktop game with libGdx and I want it to have that retro pixelated look you know from games like "Flappy Bird". To achieve that effect, I thought of the following:
Create a game window (e.g. 640x480)
Create a framebuffer half that size (i.e. 320x240)
Render everything to the framebuffer
Get the texture from the framebuffer
Draw the texture to the screen with SpriteBatch, scaling it 2 times up and using TextureFilter.Nearest.
I know I could scale each sprite individually with SpriteBatch.draw(), but I thought rendering everything at its original resolution and just scaling up the final composition might be easier.
So would the above technique be an appropriate way of getting that pixelated look?
What you have in mind sounds like a perfectly fine approach. The downside is that it does involve an additional data copy, but on the other hand your original rendering is for only 1/4 of the pixels, which saves you quite a bit of rendering overhead.
In plain OpenGL, you could use glBlitFramebuffer() for step 5. This requires OpenGL 3.0 or higher. It's essentially the same operation as drawing a textured quad, but it's a single call, and the underlying implementation could potentially be more efficient.
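In libGDX specifically, the FrameBuffer class wraps the FBO plumbing, so steps 2 to 5 could look like this sketch (the flip-Y in the final draw is the part that usually trips people up, since FBO color textures are stored upside down):

FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGB888, 320, 240, false);

fbo.begin();
// ... render the whole scene at 320x240 here ...
fbo.end();

Texture sceneTex = fbo.getColorBufferTexture();
sceneTex.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest); // keep hard pixel edges

batch.begin();
batch.draw(sceneTex, 0, 0, 640, 480, 0, 0, 320, 240, false, true); // scale up 2x, flipY = true
batch.end();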