So basically this is my problem:
I am creating a game that needs a texture for each object. Normally I would use a sprite sheet, but the textures are different sizes. I am using VBOs, and I need to somehow bind the correct textures when calling drawElements. I don't want to give each class its own VBO, because that would mean 100+ VBOs per level, which doesn't seem very efficient (or maybe it is?)
Note: this is a 2D game, but I still want to make it efficient.
Maybe there is something I can do with shaders? I am using shaders...
So that is my question:
What Do I Do?
Things I came up with:
Separate the classes into different VBOs (easy, but I am not sure it's very efficient)
Use sprite sheets but with a really, really big cell size, then just draw big quads with transparent backgrounds (seems like a stupid idea :P)
That's it... So I hope you have ideas!
EDIT: I read somewhere that it's possible to have an attribute saying which texture to use, pass it in per vertex, and then have the fragment shader use it. If this is true, I'd love it if someone could describe it in more detail and add some examples. (Also, if this requires customizing the fragment shader, please tell me how, because I don't know how to write shaders.)
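For the record, the attribute idea from the EDIT is workable. Here is a rough sketch, assuming libgdx's ShaderProgram (every name here is made up): give each vertex a float attribute carrying a texture slot, have the vertex shader copy it into a varying, bind each texture to its own texture unit, and branch on the value in the fragment shader. GLSL ES 2.0 only allows constant indices into sampler arrays, hence the if/else chain instead of an array lookup:
String frag =
      "#ifdef GL_ES\n"
    + "precision mediump float;\n"
    + "#endif\n"
    + "varying vec2 v_texCoords;\n"
    + "varying float v_texIndex;\n" // the vertex shader copies the per-vertex attribute into this
    + "uniform sampler2D u_texture0;\n"
    + "uniform sampler2D u_texture1;\n"
    + "void main() {\n"
    + "  if (v_texIndex < 0.5) gl_FragColor = texture2D(u_texture0, v_texCoords);\n"
    + "  else gl_FragColor = texture2D(u_texture1, v_texCoords);\n"
    + "}\n";
// Bind each texture to its own unit once, before drawing the whole VBO:
texture0.bind(0);
texture1.bind(1);
shader.setUniformi("u_texture0", 0);
shader.setUniformi("u_texture1", 1);
The catch is the hardware limit on texture units (often 8 on ES 2.0 devices), so this only avoids texture switches up to that limit; the one-VBO-per-texture route below is simpler.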
Solved by just separating the classes into different VBOs.
I'm having quite a bit of difficulty wrapping my head around the actual display side of things with libgdx. That is, it just seems fairly jumbled in terms of what needs to be done in order to actually put something up onto the screen. I guess my confusion can sort of be separated into two parts:
What exactly needs to be done in terms of creating an image? There's Texture, TextureRegion, TextureAtlas, Sprite, Batch, and probably a few other art-related assets that I'm missing. How do these all relate and tie into each other? What's the "production chain" among these, I guess, would be a way of putting it.
In terms of putting whatever is created from the stuff above onto the monitor or display, how do the different coordinate and sizing measures relate and translate to and from each other? Say there's some image X that I want to put on the screen. It's got its own set of dimensions and coordinates, but then there's also a viewport size (is there a viewport position?) and a camera position (is there a camera size?). On top of all that, there's also the overall display size from Gdx.graphics. A few examples of things I might want to do:
X is my "global map" that is bigger than my screen
size. I want to be able to scroll/pan across it. What are the
coordinates/positions I should use when displaying it?
Y is bigger
than my screen size. I want to scale it down and have it always be
in the center of the screen/display. What scaling factor do I use
here, and which coordinates/positions?
Z is smaller than my screen
size. I want to stick it in the upper left corner of my screen and
have it "stick" to the global map I mentioned earlier. Which
positioning system do I use?
Sorry if that was a bunch of stuff... I guess the tl;dr of that second part is: which set of positions/coordinates, sizes, and scales am I supposed to do everything in terms of?
I know this might be a lot to ask at once, and I also know that most of this stuff can be found online, but after sifting through tutorial after tutorial, I can't seem to get a straight answer as to how these things all relate to each other. Any help would be appreciated.
Texture is essentially the raw image data.
TextureRegion allows you to grab smaller areas from a larger texture. For example, it is common practice to pack all of the images for your game/app into a single large texture (the LibGDX “TexturePacker” is a separate program that does this) and then use regions of the larger texture for your individual graphics. This is done because switching textures is a heavy and slow operation and you want to minimize this process.
When you pack your images into a single large image with the TexturePacker it creates a “.atlas” file which stores the names and locations of your individual images. TextureAtlas allows you to load the .atlas file and then extract your original images to use in your program.
Sprite adds position and color capabilities to the texture. Notice that the Texture API has no methods for setting/getting position or color. Sprites will be your characters and other objects that you can actually move around and position on the screen.
Batch/SpriteBatch is an efficient way of drawing multiple sprites to the screen. Instead of issuing a separate draw call for each sprite, the Batch collects many sprites and submits them to the GPU together.
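To make the "production chain" concrete, here is a minimal sketch (the file name "game.atlas" and the region name "hero" are placeholders):
TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas")); // loads the .atlas file and its packed texture(s)
TextureRegion region = atlas.findRegion("hero"); // a rectangle inside the packed texture
Sprite hero = new Sprite(region);                // adds position/color on top of the region
hero.setPosition(100, 50);
SpriteBatch batch = new SpriteBatch();
// inside render():
batch.begin();
hero.draw(batch);
batch.end();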
And hopefully I’m not adding to the confusion, but another option I really like is using the “Actor” and “Stage” classes over the “Sprite” and “SpriteBatch” classes. Actor is similar to Sprite but adds additional functionality for moving/animating via the act method. The Stage replaces the SpriteBatch, as it uses its own internal SpriteBatch, so you do not need to use a SpriteBatch explicitly.
There is also an entire set of UI components (table, button, textfield, slider, progress bar, etc) which are all based off of Actor and work with the Stage.
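A minimal sketch of that Stage/Actor alternative (the file name "logo.png" and the positions are placeholders):
Stage stage = new Stage();
Image logo = new Image(new Texture(Gdx.files.internal("logo.png")));
logo.setPosition(10, 10);
stage.addActor(logo);
// inside render():
stage.act(Gdx.graphics.getDeltaTime()); // updates every actor it contains
stage.draw();                           // draws them with the stage's internal batch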
I can’t really help with question 2. I stick to UI-based apps, so I don’t know the best practices for working with large game worlds. But hopefully someone more knowledgeable in that area can help you with that.
This was too long to reply as a comment, so I’m responding as another answer...
I think both Sprite/SpriteBatch and Actor/Stage are equally powerful as you can still animate and move with Sprite/SpriteBatch, but Actor/Stage is easier to work with. The stage has two methods called “act” and “draw” which allows the stage to update and draw every actor it contains very easily. You override the act method for each of your actors to specify what kind of action you want it to do. Look up a few different tutorials for Stage/Actor with sample code and it should become clear how to use it.
Also, I was slightly incorrect before that “Actor” is equivalent to Sprite, because Sprite includes a texture, but Actor by itself does not have any kind of graphical component. There is an extension of Actor called “Image” that includes a Drawable, so the Image class is actually the equivalent to Sprite. Actor is the base class that provides the methods for acting (or “updating”), but it doesn’t have to be graphical. I've used Actors for other purposes such as triggering audio sounds at specific times.
Atlas creates the large Texture containing all of your PNG files and then allows you to get regions from it for the individual PNGs. So the pipeline for getting a specific PNG graphic would be Atlas > Region > Sprite/Image. Both the Image and Sprite classes have constructors that take a region.
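A small sketch tying the pipeline to an act() override (the class, the region names, and the drift speed are all invented for illustration):
public class ScrollingImage extends Image {
    public ScrollingImage(TextureRegion region) {
        super(region);
    }
    @Override
    public void act(float delta) {
        super.act(delta);      // keeps any attached Actions running
        moveBy(50 * delta, 0); // drift right at 50 px per second
    }
}
// usage: Atlas > Region > Image, then add to the Stage
TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));
stage.addActor(new ScrollingImage(atlas.findRegion("cloud")));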
I have been trying to solve this problem for two days and I have given up trying to find an existing solution.
I have started learning libgdx and finished a couple of tutorials. Now I have tried to use everything I've learned to create a simple side-scrolling game. I know that there are libgdx examples of this, but I haven't found one that combines Box2d with scene2d and actors as well as tiled maps.
My main problem is with the cameras.
You need a camera for the Stage (which, as far as I know, is used for the projection matrix of the SpriteBatch passed to the actors' draw() method; if this is wrong, please correct me), and you need a camera for the TileMapRenderer when calling its render() method. Also, in some of the tutorials there is an OrthographicCamera in the GameScreen, which is used where needed.
I have tried passing an OrthographicCamera object to methods, and I have tried using the camera from the Stage and the camera from the TileMapRenderer everywhere.
For example:
OrthographicCamera ocam = new OrthographicCamera(FRUSTUM_WIDTH, FRUSTUM_HEIGHT);
stage.setCamera(ocam); // In the other cases I replace ocam with stage.getCamera() or the one I use for the tile map renderer
tileMapRenderer.render(ocam);
stage.getSpriteBatch().setProjectionMatrix(ocam.combined); // I am not sure if this is needed
I have also tried to use different cameras everywhere.
After trying all of this, I haven't noted exactly which setup causes which result, but here is what happens:
There is nothing on the screen (probably the camera is pointed away from the stuff that is drawn).
I can see the tiled map and the outlines from the debug renderer (I use a debug renderer too, but I don't think it interferes with the cameras), but the sprite of the actor is not visible (probably off screen).
I can see everything I should, but when I try to move the Actor and the Camera that is supposed to follow him, the sprite moves faster than the body (the green debug square).
So my main questions are :
I don't understand what happens when you have multiple cameras. "Through" which one do you actually see on the monitor?
Should I use multiple cameras, and how?
Also, I thought that I should mention that I am using OpenGL ES 2.0.
I am sorry for the long question, but I thought that I should describe in detail, since it's a bit complicated for me.
You actually see through all of them at the same time. They might each look at a completely different world, but all of them render their point of view to the screen.
You can use several cameras, or just one. If you use only one you need to make sure that you update the projection matrix correctly, between drawing the TiledMap, your Stage with Actors and maybe for the optional Box2DDebugRenderer.
I'd use an extra Camera for the Box2DDebugRenderer, because you can easily throw it away later. I assume you use a conversion factor to convert meters to pixels and the other way around. Having a 1:1 ratio wouldn't be very good; I always used something between 1 m = 16 px and 1 m = 128 px.
So you initialize it this way, and use that one for your debugging renderer:
OrthographicCamera physicsDebugCam = new OrthographicCamera(Gdx.graphics.getWidth() / Constants.PIXELS_PER_METER, Gdx.graphics.getHeight() / Constants.PIXELS_PER_METER);
For your TiledMapRenderer you may use an extra camera as well, but that one will work in screen-coordinates only, so no conversion:
OrthographicCamera tiledMapCam = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
The TiledMap will always be rendered at (0, 0). So you need to use the camera to move around on the map. It will probably follow a body, so you can update it via:
tiledMapCam.position.set(body.getPosition().x * Constants.PIXELS_PER_METER, body.getPosition().y * Constants.PIXELS_PER_METER, 0);
Or in case it follows an Actor:
tiledMapCam.position.set(actor.getX(), actor.getY(), 0);
I actually haven't used scene2d together with Box2D yet, because I didn't need to interact very much with my game objects. You would need to implement a custom PhysicsActor here, which extends Actor and bridges scene2d to Box2D by having a Body as a property. It has to set the Actor's position, rotation, etc. based on the Body at every update step. Here you have several options. You may re-use the tiledMapCam and work in screen coordinates; in this case you need to remember to multiply by Constants.PIXELS_PER_METER whenever you update your actor. Or you can use another camera with the same viewport as the physicsDebugCam; in this case no conversion is needed, but I'm not sure whether this might interfere with some scene2d-specific things.
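A sketch of such a PhysicsActor, using the screen-coordinate option (the class itself is hypothetical; Constants.PIXELS_PER_METER is the conversion factor from above):
public class PhysicsActor extends Actor {
    private final Body body;
    public PhysicsActor(Body body) {
        this.body = body;
    }
    @Override
    public void act(float delta) {
        super.act(delta);
        // sync the scene2d side to the Box2D side, converting meters to pixels
        setPosition(body.getPosition().x * Constants.PIXELS_PER_METER,
                    body.getPosition().y * Constants.PIXELS_PER_METER);
        setRotation(body.getAngle() * MathUtils.radiansToDegrees);
    }
}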
For a ParallaxBackground you may use another camera as well, for UI you can use another stage and another camera again... or reuse others by resetting them correctly. It's your choice but I think several cameras do not influence performance much. Less resetting and conversions might even improve it.
After everything is set up, you just need to render everything using the correct cameras, layering each "view" on top of the previous one: first a ParallaxBackground, then your TiledMap, then your entity Stage, then your Box2D debugging view, then your UI Stage.
In general, remember to call spriteBatch.setProjectionMatrix(cam.combined); and to call cam.update() after you change anything about a camera.
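Putting it together, one possible render() ordering using the cameras from above (parallax and UI layers omitted; tileMapRenderer, debugRenderer, world, and body are the objects from the question):
tiledMapCam.position.set(body.getPosition().x * Constants.PIXELS_PER_METER, body.getPosition().y * Constants.PIXELS_PER_METER, 0);
tiledMapCam.update();
tileMapRenderer.render(tiledMapCam); // map layer, screen coordinates
stage.act(Gdx.graphics.getDeltaTime());
stage.draw(); // entity layer; the stage applies its own camera internally
physicsDebugCam.position.set(body.getPosition().x, body.getPosition().y, 0); // meters, no conversion
physicsDebugCam.update();
debugRenderer.render(world, physicsDebugCam.combined); // debug overlay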
I'm having a problem with my rendering cycle in libgdx. Basically I need to fill an area with a square texture, but the last part of this area may be smaller or differently shaped than the texture, which means I need to render a quad of arbitrary form and map the texture onto it, cutting away the parts I don't need.
I'm a bit lost on how to do this. So far I've seen that PolygonRegion and PolygonSpriteBatch might do it for me, but I'm wary of instantiating a heavy new object that I'll use for only one object.
Is there any alternative? Perhaps the Mesh class, but I'd like to be certain.
I suggest using a Mesh to define exactly the region you want. Defining the vertex points and mapping them to the texture coordinates is a bit fiddly, but it's good to know what's going on underneath some of the higher-level APIs (like the *Batch bits). Additionally, the *Batch APIs are designed to share the cost of uploading a single texture across multiple objects, which sounds like it might not apply in this case. (On the other hand, even if the Batch objects are a bit "heavyweight", they may not actually be a problem in practice.)
Another approach to consider is to render the object as a square mesh, but to define your texture with transparent pixels for all the pixels outside the region. (I'm assuming the non-square shape is something you can know offline, and isn't dynamic.)
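If you go the Mesh route, a rough sketch looks like this (positions and UVs are made-up values, and texture and shader are placeholders you create elsewhere; on GL10 the last line would be mesh.render(GL10.GL_TRIANGLES) with no shader):
Mesh mesh = new Mesh(true, 4, 6,
    new VertexAttribute(Usage.Position, 2, "a_position"),
    new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"));
mesh.setVertices(new float[] {
    0,   0,  0, 1,  // x, y, u, v
    100, 0,  1, 1,
    120, 80, 1, 0,  // the skewed corner is what gives the non-square shape
    0,   80, 0, 0 });
mesh.setIndices(new short[] {0, 1, 2, 2, 3, 0}); // two triangles
texture.bind();
mesh.render(shader, GL20.GL_TRIANGLES);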
It isn't a big problem if you instantiate a PolygonSpriteBatch for this purpose. The object mainly contains buffers for geometry. Of course you will need to take care of the correct rendering order, calling flush() or end() when needed.
Mesh is another option but it can be a bit more work because you need to provide vertices and texture coordinates there manually.
From a performance point of view, rendering a single sprite is slightly faster with a Mesh. I'm not sure whether the difference would affect the frame rate in your case.
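For comparison, the PolygonRegion route looks roughly like this (the vertex values are placeholders; texture is whatever you are tiling with):
float[] vertices = {0, 0, 100, 0, 120, 80, 0, 80}; // polygon outline in pixels
short[] triangles = {0, 1, 2, 2, 3, 0};            // triangulation of that outline
PolygonRegion polyRegion = new PolygonRegion(new TextureRegion(texture), vertices, triangles);
PolygonSpriteBatch polyBatch = new PolygonSpriteBatch();
polyBatch.begin();
polyBatch.draw(polyRegion, 50, 50); // draws the clipped quad at (50, 50)
polyBatch.end();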
EDIT: I forgot to mention: if you use a SpriteBatch to render a single object, don't use the default constructor; it reserves a lot of memory.
What is the best way to edit individual pixels of a texture multiple times per frame? I have tried a couple of ways, to no avail. What is the most efficient way to do this? I have tried immediate mode, drawing each quad separately, though this is really slow.
Edit:
I forgot to mention that I am doing an unusual "fog of war" system. This system doesn't let you see around walls; instead it acts like a 2D ray-traced shadow. I want these shadows to be pixelated, as this is part of the style of the game. I am trying to find the best way to build a form of shadow map that I can overlay on the world to show what you can see.
I recommend using a Pixel Buffer Object.
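To sketch the idea, assuming desktop OpenGL via LWJGL (the texture handle shadowTex, the 64x64 size, and all names are placeholders): rebuild the shadow mask on the CPU each frame, upload it into a GL_PIXEL_UNPACK_BUFFER, and let glTexSubImage2D read from that buffer instead of stalling on client memory.
int size = 64; // a small mask stretched over the screen keeps the pixelated look
ByteBuffer mask = BufferUtils.createByteBuffer(size * size * 4); // RGBA bytes
int pbo = GL15.glGenBuffers();
GL15.glBindBuffer(GL21.GL_PIXEL_UNPACK_BUFFER, pbo);
GL15.glBufferData(GL21.GL_PIXEL_UNPACK_BUFFER, size * size * 4, GL15.GL_STREAM_DRAW);
// per frame, after the visibility ray casts have filled `mask`:
GL15.glBindBuffer(GL21.GL_PIXEL_UNPACK_BUFFER, pbo);
GL15.glBufferData(GL21.GL_PIXEL_UNPACK_BUFFER, mask, GL15.GL_STREAM_DRAW); // orphan + refill
GL11.glBindTexture(GL11.GL_TEXTURE_2D, shadowTex);
// with a PBO bound, the last argument is an offset into the buffer, not a pointer
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, size, size,
    GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, 0L);
GL15.glBindBuffer(GL21.GL_PIXEL_UNPACK_BUFFER, 0);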
I'm new to OpenGL ES and I wonder how people are able to draw such detailed OpenGL ES graphics, e.g. on Android. It's already hard to draw a single square, because it has to be composed of triangles, since OpenGL ES cannot draw anything other than triangles.
I thought about this approach:
Drawing and rendering an object in Blender.
Export it somehow as an array of vertices and an array of colors
Copy these arrays into the Java code
Run the code
Or are there better approaches to such problems? I do not think that people just "draw" their graphics as arrays of vertices in the code. I'm sure they draw them somewhere else and import them into the code.
If there is such a solution with Blender, I would be pleased to know how this is solved.
Regards.
You can save your model from Blender e.g. as a Wavefront OBJ file. In Blender you can also choose to have the model triangulated for you, which results in a list of just triangles, ready for drawing.
OBJ is a very simple format: it just lists in ASCII the position of each vertex and then which vertices belong to each triangle. Either convert the OBJ to a format specific to your application (e.g. a binary buffer you can load directly into OpenGL), find one of the many OBJ loaders, or write your own reader.
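To give an idea of how simple the format is, here is a minimal sketch of a reader for triangulated, position-only models ("model.obj" is a placeholder, java.io/java.util imports omitted; a real loader also handles vt/vn lines and the full v/vt/vn face syntax):
List<Float> positions = new ArrayList<Float>();
List<Short> indices = new ArrayList<Short>();
BufferedReader in = new BufferedReader(new FileReader("model.obj"));
String line;
while ((line = in.readLine()) != null) {
    String[] t = line.trim().split("\\s+");
    if (t[0].equals("v")) {        // vertex position: v x y z
        positions.add(Float.parseFloat(t[1]));
        positions.add(Float.parseFloat(t[2]));
        positions.add(Float.parseFloat(t[3]));
    } else if (t[0].equals("f")) { // triangle: f i j k (1-based indices)
        for (int i = 1; i <= 3; i++)
            indices.add((short) (Integer.parseInt(t[i].split("/")[0]) - 1));
    }
}
in.close();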
There are a lot of other common formats available, you should take a look at them and see if there is an importer for Android available.
What I've found is that you can load 3D models with min3D directly into your program using:
min3d
I hope this helps.