Ideal number of spritesheets for LWJGL/OpenGL - java

I am making a game using LWJGL, and so far I have decided to have 5 or 6 sprite sheets: one for the blocks, one for the items, one for objects, and so on. That way, each sheet holds sprites that are closely related and usually the same size (as with blocks and items). But is this the best way? Or should I just throw everything onto a single sprite sheet, with no organization whatsoever?
If I am doing it the right way, there is another problem. For example, when you are on the map, I need to draw the blocks. But over the blocks there can be electrical wires and other things that live on a separate sprite sheet. This information, however, is stored in the same array, so normally I just iterate over it once and, on each iteration, draw the block and then the wire over it, switching sprite sheets twice per iteration. Since I suspect those switches take some time, maybe it would be better to iterate twice: first draw all the blocks, then iterate again to draw the wires? To switch textures, I am using the SlickUtil Texture class, which has a bind method that is really easy to use.

There is no "ideal"; there are simply the factors that matter for your needs.
Remember the reason why you use sprite sheets at all: because switching textures is too expensive to do per-object when dealing with 2D rendering. So as long as you're not switching textures for each sprite you render, you'll already be ahead of the game performance-wise.
The other considerations you need to take into account are:
Minimum user hardware specifications. Specifically, what is the minimum size of GL_MAX_TEXTURE_SIZE you want your code to work on? The larger your sprite sheets get, the greater your hardware requirements, since a single sprite sheet must be a texture.
This value is hardware-dependent, but there are some general requirements. OpenGL 3.3 requires 1024 at a minimum; pretty much every piece of GL 3.3 hardware gives 4096. OpenGL 4.3 requires a massive 16384, which is approaching the theoretical limits of floating-point texture coordinate capacity (assuming you want at least 8 bits of sub-pixel texture coordinate precision).
GL 2.1 has a minimum requirement of 64, but any actual 2.1 hardware people still have will offer between 512 and 2048. So pick your sprite sheet size based on this; you can query the limit at runtime, as in the sketch after this list.
What you're rendering. You want to be able to render as much as possible from one sprite sheet; what you want to avoid is frequent texture switches. If your world is divided into layers and you can fit the sprites for each layer onto their own sheet, you're doing fine. No hardware is going to choke on 20 texture changes per frame; it's tens of thousands that are the problem.
The main thing is to render everything that uses a sheet all at once. Not necessarily in the same render call; you can switch meshes and shader uniforms/fixed-function state between them. But you shouldn't switch sheets until you've rendered everything that needs the current one.
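A minimal sketch of both points, assuming LWJGL 2-style GL11 static imports; the Sprite class, drawSprite(), the 32-pixel quad size, and the 0.1 UV span are made-up placeholders for your own sprite data:

```java
import static org.lwjgl.opengl.GL11.*;

import java.util.List;
import java.util.Map;

public class SheetRenderer {

    static class Sprite {
        float x, y, u, v; // position and texture coordinates (placeholders)
    }

    // Call once after context creation to size your sheets safely.
    static int maxSheetSize() {
        return glGetInteger(GL_MAX_TEXTURE_SIZE);
    }

    // One bind per sheet per frame, not one bind per sprite.
    static void render(Map<Integer, List<Sprite>> spritesBySheet) {
        for (Map.Entry<Integer, List<Sprite>> e : spritesBySheet.entrySet()) {
            glBindTexture(GL_TEXTURE_2D, e.getKey()); // switch sheets here only
            for (Sprite s : e.getValue()) {
                drawSprite(s); // emit a quad using this sheet's UVs
            }
        }
    }

    static void drawSprite(Sprite s) {
        // immediate-mode placeholder; batch into a VBO in real code
        glBegin(GL_QUADS);
        glTexCoord2f(s.u, s.v);               glVertex2f(s.x,      s.y);
        glTexCoord2f(s.u + 0.1f, s.v);        glVertex2f(s.x + 32, s.y);
        glTexCoord2f(s.u + 0.1f, s.v + 0.1f); glVertex2f(s.x + 32, s.y + 32);
        glTexCoord2f(s.u, s.v + 0.1f);        glVertex2f(s.x,      s.y + 32);
        glEnd();
    }
}
```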

Related

Should TextureRegions in an atlas be power of 2?

It is my understanding that images should be power-of-2 for optimization with the GPU. However, if I'm packing my textures into a sheet for libGDX, can the atlas be a power of 2 and NOT the actual regions? Or must the atlas sheet be a power of 2 and each TextureRegion also be a power of 2?
My understanding is that they don't need to be. As far as I know, the reason POT (power-of-two) textures are needed is for optimizations such as mipmapping, anisotropic filtering, and so on. When you pack many images into one texture, you upload that entire texture to the GPU, so the GPU can perform the desired optimizations on the whole texture. The atlas is just an index for looking up certain sectors (or "pieces") of your texture (texture regions), so you can retrieve the desired regions easily. Those regions don't need to be POT, because the GPU already performs its optimizations on the entire set (the entire texture). You might get a deeper answer by asking this kind of question in the game-dev community.
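A short sketch of that idea using libGDX's own atlas API; "sprites.atlas", "hero", and the region size are hypothetical names and values:

```java
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

public class AtlasDemo extends ApplicationAdapter {
    private TextureAtlas atlas;
    private TextureRegion hero;
    private SpriteBatch batch;

    @Override public void create() {
        // The page texture inside the atlas can be POT (say 1024x1024)...
        atlas = new TextureAtlas(Gdx.files.internal("sprites.atlas"));
        // ...while a region can be any size, e.g. 37x51.
        hero = atlas.findRegion("hero");
        batch = new SpriteBatch();
    }

    @Override public void render() {
        batch.begin();
        batch.draw(hero, 100, 100); // just a UV lookup into the POT page
        batch.end();
    }
}
```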

Is changing SpriteBatch projection matrix inside begin...end a good idea?

I have a list of game objects that I want to render in libgdx 2D.
In my setup I have two different cameras, one that matches the pixel resolution exactly (for drawing tiled background and fonts) and one that matches the gameboard with a typical viewport of (-0.1, -0.1, 1.2, 1.5).
Now I would like each game object in the list to decide for themselves which camera to use when rendering, which means that the projection matrices for the SpriteBatch may change several times inside the SpriteBatch.begin()...end().
One other solution would be to have two SpriteBatches active at the same time, make the game objects select which SpriteBatch to render to, and then call end() on both after rendering is finished. Some other questions and posts I've found on the Internet suggest that multiple active SpriteBatches are not a good idea.
What are the pros and cons of these solutions? Is there a better one?
Design-wise, that's awful!
I usually have two cameras as well: one for the game objects with low values (12x8 or 15x10), and one for the HUD (attack button, stats, etc.) with high values (pixel-perfect on my test device). I first draw the game objects, finish the batch, and then draw the HUD. Just two draw passes. But:
SpriteBatch#setProjectionMatrix
"If this is called inside a Batch.begin()/Batch.end() block, the current batch is flushed to the gpu."
If you set the projection matrix several times inside your render code, that will trigger a lot of batch flushes and greatly hurt your performance.
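A sketch of the two-pass pattern described above; worldCamera, hudCamera, and the two draw methods stand in for your own code:

```java
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class TwoPassRenderer {

    // Pass 1: game objects in world units. Pass 2: HUD in pixel units.
    // Each setProjectionMatrix happens outside begin()/end(), so there
    // are exactly two flushes per frame.
    void render(SpriteBatch batch, OrthographicCamera worldCamera,
                OrthographicCamera hudCamera) {
        batch.setProjectionMatrix(worldCamera.combined);
        batch.begin();
        drawGameObjects(batch);
        batch.end(); // one flush here

        batch.setProjectionMatrix(hudCamera.combined);
        batch.begin();
        drawHud(batch);
        batch.end(); // and one here
    }

    void drawGameObjects(SpriteBatch batch) { /* your world rendering */ }
    void drawHud(SpriteBatch batch)         { /* your HUD rendering */ }
}
```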

Java Tile Game Zooming

I am building a 2D top-down tile-based game in Java. Naturally you can pan around and zoom in on the game; there are currently 10 zoom levels, with tiles ranging from 10x10 pixels up to 100x100 pixels accordingly. Currently, the tiles for each zoom level are stored in separate sprite sheets, read in at startup and stored in a BufferedImage array. I am sure this can't be the best way to go about this.
I am looking for any tips to enhance efficiency in the long term. Would it be better to keep only the 100x100 tiles and scale them dynamically in Java, or somehow use vector graphics in Java (I'm not sure how, but I'm sure Google could help me), or what?
Many thanks!
I'd go dynamic.
Normally in computer graphics you use matrices that, applied to the graphics context, modify everything you draw on it.
These are used to modify position, scale, rotation, etc. Rather than subtracting the camera position from every tile, you apply the translation once to the graphics context and then draw your tiles at their world positions. The graphics context takes care of placing the tiles in the correct screen space; see the sketch after the reading list below.
I suggest the following reads:
http://docs.oracle.com/javase/tutorial/2d/advanced/transforming.html
http://www.javalobby.org/java/forums/t19387.html
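A minimal Java2D sketch of the idea: apply the camera once to the graphics context, then draw every tile in world coordinates. Tile, visibleTiles(), and drawTile() are hypothetical placeholders for your own tile code:

```java
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;

void renderWorld(Graphics2D g2, double camX, double camY, double zoom) {
    AffineTransform saved = g2.getTransform(); // remember the original state
    g2.scale(zoom, zoom);        // zoom about the origin
    g2.translate(-camX, -camY);  // then move the camera
    for (Tile t : visibleTiles()) {
        drawTile(g2, t);         // tile positions stay in world space
    }
    g2.setTransform(saved);      // restore before drawing any HUD
}
```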
If you're doing fixed zooming (i.e. each zoom level is a fixed distance from the camera), as opposed to fluid zooming (the player can zoom by 3.3x or 7.5x, not just 1x, 2x, 3x, etc.), then it's massively wasteful to solve this by simply applying a zoom transform. It's tempting because it's the least complicated approach and easy to understand from an implementation standpoint, but it means that at maximum zoom-out you're rendering an area 10x larger in the X direction and 10x larger in the Y direction, so the area of the world you have to render is 100x larger than at maximum zoom-in. I also doubt you'll like the way your textures get squished by the hardware as you zoom out. Computer graphics isn't the same as optics: subpixel rendering and the other things that happen in computer graphics aren't going to make your textures look very good if you hand that task off to the software/hardware.
Even if you do fluid zooming, I would still use level-of-detail textures and dynamically swap them out depending on the distance between the world being rendered and the camera, as in the sketch below.
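A hedged sketch of that swap, assuming the sheets are pre-scaled copies of the 100x100 art at halving resolutions; the array layout and the log2 mapping are invented for illustration:

```java
import java.awt.image.BufferedImage;

// lodSheets[0] holds the 100x100 art, [1] the 50x50 version, and so on.
BufferedImage[] lodSheets;

// Pick the pre-rendered sheet closest to the current zoom factor
// (zoom = 1.0 means fully zoomed in, 0.5 means half size, ...).
BufferedImage sheetFor(double zoom) {
    int lod = (int) Math.round(-Math.log(zoom) / Math.log(2));
    lod = Math.max(0, Math.min(lodSheets.length - 1, lod));
    return lodSheets[lod];
}
```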
Also, 10 zoom levels? Are you sure you really need 10 zoom levels? Zoom is usually used in 2D games to allow you to perform different activities at different levels of detail because a particular zoom level is especially well suited for a certain set of activities. I don't remember any 2D game that needed 10 zoom levels to accomplish this. 3-5 is the most I've ever seen, and I've never felt that it wasn't enough. It also seems like a lot of art work to produce the images at every zoom level for 10 zoom levels.
You're also likely to find that applying an AffineTransform sounds like a good idea but is extremely computationally expensive; if you need 60fps performance, you're unlikely to achieve it this way. Don't take my word for it, though: go try it and see how badly it falls over on itself.

drawCircle vs drawBitmap

I'm planning on implementing a new set of figures in my game: plain circles. The number of drawn sprites (in this case circles) starts at 2-3 and can potentially grow endlessly, though the maximum will probably be around 60. In total there will be 5 types of circles, each with a different color and probably a different size too. Since I won't implement it until Monday, I thought I'd ask on Stack Overflow first.
Does anybody already know which method is faster?
Bitmaps are almost always faster than any kind of draw call. With the right preparation, drawing a bitmap is simply dumping memory to the screen, while drawing a circle involves a significant number of calculations, including anti-aliasing. I presented a paper covering this at JavaOne 2009, but papers that old seem to have been removed from the site.
It does depend on how big your bitmap needs to be, but for sizes under 10 pixels, bitmap sprites are much faster than even simple graphics operations like drawing crosses and lines. You also need to make sure that your sprite won't require any kind of transform when drawn, and that its format is compatible with the screen memory.
If every circle is to be a different color or thickness, or worse, a different size, then that's another matter: the cost of creating each bitmap would outweigh the savings.
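Since the question mentions only five fixed circle types, a Java2D sketch of paying the circle-drawing cost once up front (the diameters and colors would be your own):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

// Render one circle type into an ARGB image, once, at startup.
static BufferedImage circleSprite(int diameter, Color color) {
    BufferedImage img = new BufferedImage(diameter, diameter,
            BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = img.createGraphics();
    g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
            RenderingHints.VALUE_ANTIALIAS_ON); // pay the AA cost here, once
    g.setColor(color);
    g.fillOval(0, 0, diameter, diameter);
    g.dispose();
    return img;
}

// Per frame, drawing each circle is then just a blit:
// g2.drawImage(redCircle, x, y, null);
```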
You should also remember the first rule of optimization: don't do it unless you have to.

OpenGL - Pixel color at specific depth

I have rendered a 3D scene in OpenGL viewed through a gluOrtho (orthographic) projection. In my application I am looking at the front face of a cube with a volume of 100x70x60 mm (which I render as 1000x700x600 pixels). Inside this cube I have rendered a simple blue sphere which sits exactly in the middle and 'fills' the cube (radius 300 pixels).
I now want to read the color value of pixels (in 3D) at specific points within the cube; i.e. I wish to know if say point (100,100,-200) is blue or blank (black).
glReadPixels only allows 2D extraction of color, and I have tried it with GL_DEPTH_COMPONENT, but I am unsure what this should return in byte form. Is there a way to combine the two? Am I missing something?
I am using Eclipse with Java and JOGL.
This can't be done in the context of OpenGL--you'll need some sort of scene graph or other space partitioning scheme working in concert with your application's data structures.
The reason is simple: the frame buffer only stores the color and depth of the fragment nearest to the eye at each pixel location (assuming a normal GL_LESS depth function). The depth value stored in the Z-buffer is used to determine if each subsequent fragment is closer or farther from the eye than the existing fragment, and thus whether the new fragment should replace the old or not. The frame buffer only stores color and depth values from the most recent winner of the depth test, not the entire set of fragments that would have mapped to that pixel location. Indeed, there would be no way to bound the amount of graphics memory required if that were the case.
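For completeness, here is roughly what glReadPixels can give you: the color and window-space depth of the front-most surviving fragment at one pixel, and nothing behind it. A JOGL-flavored sketch (JOGL 2 package names; older versions used javax.media.opengl):

```java
import java.nio.ByteBuffer;
import java.nio.FloatBuffer;
import com.jogamp.common.nio.Buffers;
import com.jogamp.opengl.GL2;

public class FragmentReader {

    // Read back the surviving fragment at window coordinates (x, y).
    static float[] readFrontFragment(GL2 gl, int x, int y) {
        ByteBuffer rgba = Buffers.newDirectByteBuffer(4);
        gl.glReadPixels(x, y, 1, 1, GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, rgba);

        FloatBuffer depth = Buffers.newDirectFloatBuffer(1);
        gl.glReadPixels(x, y, 1, 1, GL2.GL_DEPTH_COMPONENT, GL2.GL_FLOAT, depth);

        // depth.get(0) is in [0, 1] window space; mapping it back to an
        // eye-space z means inverting your projection, and even then it
        // is only the nearest surface, never anything occluded behind it.
        return new float[] { rgba.get(0) & 0xFF, rgba.get(1) & 0xFF,
                rgba.get(2) & 0xFF, depth.get(0) };
    }
}
```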
You're not the first to fall for this misconception, so I'll say it as bluntly as possible: OpenGL doesn't work that way. OpenGL never(!) deals with objects or any complex scene. The only things OpenGL knows about are framebuffers, shaders, and single triangles. Whenever you draw an object, usually composed of triangles, OpenGL sees only one triangle at a time. And once something has been drawn to the framebuffer, whatever was there before is lost.
There are algorithms built on rasterizers (which is what OpenGL is) that decompose a rendered scene into its parts; depth peeling is one of them.
