I'm having quite a bit of difficulty wrapping my head around the actual display side of things with libgdx. That is, it just seems fairly jumbled in terms of what needs to be done in order to actually put something up onto the screen. I guess my confusion can sort of be separated into two parts:
What exactly needs to be done in terms of creating an image? There's Texture, TextureRegion, TextureAtlas, Sprite, Batch, and probably a few other art-related assets that I'm missing. How do these all relate and tie into each other? What's the "production chain" among these, I guess, would be a way of putting it.
In terms of putting whatever is created from the stuff above onto the monitor or display, how do the different coordinate and sizing measures relate and translate to and from each other? Say there's some image X that I want to put on the screen. It's got its own set of dimensions and coordinates, but then there's also a viewport size (is there a viewport position?) and a camera position (is there a camera size?). On top of all that, there's also the overall display size from Gdx.graphics. A few examples of things I might want to do could be as follows:
X is my "global map" that is bigger than my screen size. I want to be able to scroll/pan across it. What are the coordinates/positions I should use when displaying it?
Y is bigger than my screen size. I want to scale it down and have it always be in the center of the screen/display. What scaling factor do I use here, and which coordinates/positions?
Z is smaller than my screen size. I want to stick it in the upper-left corner of my screen and have it "stick" to the global map I mentioned earlier. Which positioning system do I use?
Sorry if that was a bunch of stuff... I guess the tl;dr of the second part is: which set of positions/coordinates, sizes, and scales am I supposed to do everything in terms of?
I know this might be a lot to ask at once, and I also know that most of this stuff can be found online, but after sifting through tutorial after tutorial, I can't seem to get a straight answer as to how these things all relate to each other. Any help would be appreciated.
Texture is essentially the raw image data, uploaded to the GPU as an OpenGL texture.
TextureRegion allows you to grab smaller areas from a larger texture. For example, it is common practice to pack all of the images for your game/app into a single large texture (the LibGDX “TexturePacker” is a separate program that does this) and then use regions of the larger texture for your individual graphics. This is done because switching textures is a heavy and slow operation and you want to minimize this process.
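A minimal sketch of those two classes, assuming a file named sheet.png in your assets folder (the file name, position, and size here are made up for the example; libGDX imports omitted):

```java
// Load the full image as an OpenGL texture.
Texture sheet = new Texture(Gdx.files.internal("sheet.png"));
// Grab a 64x64 piece of it, starting at the top-left corner of the sheet.
TextureRegion hero = new TextureRegion(sheet, 0, 0, 64, 64);
```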
When you pack your images into a single large image with the TexturePacker it creates a “.atlas” file which stores the names and locations of your individual images. TextureAtlas allows you to load the .atlas file and then extract your original images to use in your program.
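Loading the packed images back looks roughly like this, assuming TexturePacker produced a game.atlas file containing a region named "hero" (both names are hypothetical):

```java
TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));
// Look up one of the original images by the name it had before packing.
TextureRegion hero = atlas.findRegion("hero");
```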
Sprite adds position and color capabilities to the texture. Notice that the Texture API has no methods for setting/getting position or color. Sprites will be your characters and other objects that you can actually move around and position on the screen.
Batch/SpriteBatch is an efficient way of drawing multiple sprites to the screen. Instead of issuing a separate GPU draw call for each sprite, the Batch collects them and submits them together in far fewer calls.
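Putting the pieces together, a sketch of a Sprite drawn through a SpriteBatch, reusing the hypothetical hero region from above:

```java
SpriteBatch batch = new SpriteBatch();
Sprite heroSprite = new Sprite(hero);
heroSprite.setPosition(100, 50);           // Texture alone has no position...
heroSprite.setColor(1f, 0.8f, 0.8f, 1f);   // ...or color; Sprite adds both

// Inside your render() method:
batch.begin();
heroSprite.draw(batch);
batch.end();
```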
And hopefully I'm not adding to the confusion, but another option I really like is using the "Actor" and "Stage" classes instead of the "Sprite" and "SpriteBatch" classes. Actor is similar to Sprite but adds additional functionality for moving/animating, via the act method. The Stage replaces the SpriteBatch: it uses its own internal SpriteBatch, so you do not need to use a SpriteBatch explicitly.
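A minimal sketch of the Stage route (the viewport choice is just one common option):

```java
Stage stage = new Stage(new ScreenViewport());
stage.addActor(someActor);  // any Actor, e.g. an Image (see the second answer below)

// Inside your render() method; no explicit SpriteBatch needed:
stage.act(Gdx.graphics.getDeltaTime());  // update every actor
stage.draw();                            // draw every actor
```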
There is also an entire set of UI components (table, button, textfield, slider, progress bar, etc.) which are all based on Actor and work with the Stage.
I can’t really help with question 2. I stick to UI-based apps, so I don’t know the best practices for working with large game worlds. But hopefully someone more knowledgeable in that area can help you with that.
This was too long to reply as a comment, so I'm responding as another answer...
I think both Sprite/SpriteBatch and Actor/Stage are equally powerful, since you can still animate and move things with Sprite/SpriteBatch, but Actor/Stage is easier to work with. The stage has two methods called "act" and "draw" which allow the stage to update and draw every actor it contains very easily. You override the act method for each of your actors to specify what kind of action you want it to perform (a small example follows below). Look up a few different tutorials for Stage/Actor with sample code and it should become clear how to use it.
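For instance, a sketch of an act override; the class and its behavior are invented purely for illustration:

```java
public class TimerActor extends Actor {
    private float remaining = 3f;  // seconds until this actor removes itself

    @Override
    public void act(float delta) {
        super.act(delta);          // let any attached Actions run first
        remaining -= delta;
        if (remaining <= 0) {
            remove();              // take this actor off the stage
        }
    }
}
```

Note that this actor draws nothing; as explained next, Actor by itself has no graphical component.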
Also, I was slightly incorrect before in saying that "Actor" is equivalent to Sprite: Sprite includes a texture, but Actor by itself does not have any kind of graphical component. There is an extension of Actor called "Image" that includes a Drawable, so the Image class is actually the equivalent of Sprite. Actor is the base class that provides the methods for acting (or "updating"), but it doesn't have to be graphical. I've used Actors for other purposes, such as triggering audio at specific times.
The Atlas creates the large Texture containing all of your PNG files and then lets you get regions from it for the individual PNGs. So the pipeline for getting a specific PNG graphic is Atlas > Region > Sprite/Image; both the Image and Sprite classes have constructors that take a region.
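That pipeline in code, with hypothetical file and region names:

```java
TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));
TextureRegion tree = atlas.findRegion("tree");  // Atlas > Region
Sprite treeSprite = new Sprite(tree);           // Region > Sprite (SpriteBatch route)
Image treeImage = new Image(tree);              // Region > Image  (Stage route)
```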
I am trying to draw circles (representing people) on a PNG map of Earth. Obviously, I don't want people to be floating in the oceans as the simulated civilization expands. How would I create specified areas on the map that I can draw on? Can I simply restrict the background of the PNG?
I tried using the clip() method, but it still allowed spawning within the oceans. Any suggestions (or if you recommend clip(), how would you use it in this case) would be appreciated.
You somehow have to tell your program where drawing is allowed and where not. There are a bunch of approaches to this:
Try to define a set of detectable features on your PNG image (the color range of land pixels, for example) and only spawn, or refuse to spawn, civilizations where those features are met.
Make a "map" out of your map: define areas on your PNG image where spawning is allowed and where it is not. You could do this using the Polygon class, for example.
Make a second PNG image in which all oceans are fully transparent. Then only spawn civilizations on the original image at locations where the transparent version is non-transparent (see the sketch after this list).
Just a bunch of ideas.
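If this is a Processing sketch (the mention of clip() suggests so), the third idea could look roughly like this. The file names are hypothetical, and landMask is assumed to be a copy of the map with the oceans erased to transparency in an image editor, at the same size as the sketch:

```java
PImage worldMap;  // the visible map
PImage landMask;  // same map with all ocean pixels fully transparent

void setup() {
  size(800, 400);
  worldMap = loadImage("earth.png");            // hypothetical file names
  landMask = loadImage("earth_land_only.png");  // prepared once in an image editor
  image(worldMap, 0, 0);
}

void draw() { }  // required for mouse events to fire

// A pixel counts as land if it wasn't erased in the mask image.
boolean isLand(int x, int y) {
  return alpha(landMask.get(x, y)) > 0;
}

void mousePressed() {
  // Rejection sampling: keep picking random points until one lands on land.
  int x, y;
  do {
    x = (int) random(width);
    y = (int) random(height);
  } while (!isLand(x, y));
  ellipse(x, y, 6, 6);  // spawn a person
}
```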
For reference, the effect I'm going for is a dark overlay covering everything except a small circular area of visibility around the player.
I'm working in Processing 3, NOT p5.js.
I've looked around the Processing forums, but I can't find anything that works in the current version, or that doesn't use PGraphics and a mask, which from what I've read can be expensive.
My current attempts have amounted to drawing shapes around the player and filling the gaps with a no-fill circle that has a very large stroke weight.
Does anyone know of any methods to easily and inexpensively draw a black background over everything except a small circular area?
If this is the wrong place to ask this question just send me on my way I guess, but please be nice. Thank you:)
You could create an image (or PGraphics) that consists of mostly black, with a transparent circle in it. This is called image masking or alpha compositing. Doing a Google image search for "alpha composite" returns a bunch of the images I'm talking about.
Anyway, after you have the image, it's just a matter of drawing it on top of your scene wherever the player is. You might also use the PImage#mask() function. More info can be found in the reference.
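A minimal sketch of that idea in Processing 3; the sizes, hole radius, and mouse-as-player stand-in are all assumptions. The overlay is built once in setup(), so each frame costs only a single image() call:

```java
PImage shadow;
int holeRadius = 80;

void setup() {
  size(640, 360);
  // Build the overlay once: opaque black with a transparent hole in the middle.
  // It is twice the sketch size so it covers the screen wherever it is centered.
  shadow = createImage(width * 2, height * 2, ARGB);
  shadow.loadPixels();
  for (int y = 0; y < shadow.height; y++) {
    for (int x = 0; x < shadow.width; x++) {
      float d = dist(x, y, shadow.width / 2, shadow.height / 2);
      shadow.pixels[y * shadow.width + x] = d < holeRadius ? color(0, 0) : color(0, 255);
    }
  }
  shadow.updatePixels();
}

void draw() {
  background(120);
  fill(255, 0, 0);
  ellipse(mouseX, mouseY, 20, 20);  // stand-in for the player
  // Center the hole on the player; only a translation per frame, no per-pixel work.
  image(shadow, mouseX - shadow.width / 2, mouseY - shadow.height / 2);
}
```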
I'm having a problem with my rendering cycle using libgdx. Basically I need to fill an area with a square texture, and the last part of this area may be smaller than the texture or have a different shape, which means I need to render a quad of arbitrary form and map the texture onto it, cutting off the parts I don't need.
I'm a bit lost on how to do this. So far I've seen that PolygonRegion and PolygonSpriteBatch might do it for me, but I'm wary of instantiating a new heavyweight object I'll use for only one thing.
Is there any alternative? Perhaps the Mesh class, but I'd like to be certain.
I suggest using a Mesh to define exactly the region you want. Defining the vertex points and mapping them to texture coordinates is a bit fiddly, but it's good to know what's going on underneath some of the higher-level APIs (like the *Batch bits). Additionally, the *Batch APIs are designed to share the cost of uploading a single texture across multiple objects, which sounds like it might not apply in this case. (On the other hand, even if the Batch objects are a bit "heavyweight", they may not actually be a problem in practice.)
Another approach to consider is to render the object as a square mesh, but to define your texture with transparent pixels for all the pixels outside the region. (I'm assuming the non-square shape is something you can know offline, and isn't dynamic.)
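A rough sketch of the Mesh route, assuming you already have a texture, an orthographic camera, and a shader whose attributes are named a_position and a_texCoord0; all coordinates below are placeholders:

```java
Mesh mesh = new Mesh(true, 4, 6,
    new VertexAttribute(VertexAttributes.Usage.Position, 2, "a_position"),
    new VertexAttribute(VertexAttributes.Usage.TextureCoordinates, 2, "a_texCoord0"));

// x, y, u, v per corner; the UVs pick which part of the texture lands on the quad.
mesh.setVertices(new float[] {
      0f,   0f, 0.0f, 1.0f,
    100f,   0f, 1.0f, 1.0f,
     80f, 120f, 0.8f, 0.0f,
     10f,  90f, 0.1f, 0.2f });
mesh.setIndices(new short[] { 0, 1, 2, 0, 2, 3 });  // two triangles

texture.bind();
shader.begin();  // shader.bind() on newer libGDX versions
shader.setUniformMatrix("u_projTrans", camera.combined);
shader.setUniformi("u_texture", 0);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
```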
It isn't a big problem if you instantiate a PolygonSpriteBatch for that purpose. The object mainly holds geometric data for buffered geometry. Of course you will need to take care of the correct rendering order, calling flush() or end() when needed.
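For instance, a hedged sketch; the quad corners are placeholders and the texture is assumed to already exist:

```java
// Two triangles covering an arbitrary quad; vertices are x,y pairs in winding order.
float[] vertices = { 0, 0,  100, 0,  80, 120,  10, 90 };
short[] triangles = { 0, 1, 2,  0, 2, 3 };
PolygonRegion region = new PolygonRegion(new TextureRegion(texture), vertices, triangles);

// Small capacity, since it only ever draws this one object (see the EDIT below).
PolygonSpriteBatch polyBatch = new PolygonSpriteBatch(64);

polyBatch.begin();
polyBatch.draw(region, 0, 0);
polyBatch.end();
```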
Mesh is another option, but it can be a bit more work because you need to provide the vertices and texture coordinates manually.
From a performance point of view, rendering a single sprite is slightly faster with a Mesh. I'm not sure whether the difference would have any visible effect on fps in your case.
EDIT: I forgot to mention, if you use a SpriteBatch to render a single object, don't use the default constructor; it reserves memory for a large number of sprites (1000 by default). Pass a small size instead.
I'm creating a UI system for an android game that will have a large (up to 4096x4096) background area in which menus can be placed anywhere within that screen and a camera will fly to that location when a different menu is needed. Instead of having a large static image, I'd like to be able to animate this slightly. What I'd like to know is how to do this efficiently without lagging up the device. These are the methods I've come up with so far, but maybe there is something better..
1) Have 3 separate 4096x4096 static layers for the background, 1 is the sky, one is the terrain, one is things like clouds and trees. Each layer is placed on top of each other with a slight difference in Z space to give a little parallax effect when the camera moves.
2) Have a large stationary background image, with a layer on top of that with individual specific sprites of clouds, trees and other things that should be animated. I think this might be the most efficient route, as I can choose not to animate parts that are not in view, but it will also limit re-usability as every different object will have to be placed manually in space. My goal is to be able to simply change the assets and be able to have a whole new game.
3) Have 1 large background layer with several frames that plays almost like a video. I feel like this will be the worst for performance (loading several 4096x4096 frames and drawing a different one 30 times a second), but it would give me the scene exactly how I want it, straight out of After Effects. I doubt this one is even feasible: beyond the drawing cost, storage space on Android devices wouldn't allow several 6MB frames just for the menu UI.
Are any of these in the right direction? I have seen a few similar questions asked, but none fit close enough to what I needed (a large, moving background that isn't made of tiles).
Any help is appreciated.
Since your question is tagged for Android, I would recommend the second solution.
The main reason is that solutions #1 and #3 involve loading numerous 4096x4096 textures.
Quick calculation: a 4096x4096 texture at 32 bits per pixel takes 4096 × 4096 × 4 bytes = 64 MB, so three such layers would use at least 192 MB of video RAM (more once mipmaps are added). That means you can immediately rule out a lot of Android devices.
On the other hand, solution #2 involves only two big textures: the large stationary background image, and a texture atlas containing the individual sprites of clouds, trees, and so on.
This solution is much more memory-friendly and will lead to the same aesthetic output.
TL;DR: all three solutions would work, but only #2 fits the memory budget of an embedded device.
I'm working on a small project which requires changing the clothes (shirt/pants, etc.) of a person in any 2D image the user chooses to upload. So somehow edges need to be detected and the relevant areas filled with new patterns. I do see a lot of other complications, but let's assume only simple patterns have to be filled.
For a web application, is it possible to do it in HTML5? Any other alternatives?
For a standalone application, what kind of technology would be preferred, C++/Java?
Update
Based on Bart's comment (any useful pointers like his would be really helpful):
Assumption: a clear, traceable 'standing' human figure in the 2D image.
Since it's a still image, there is no real-time requirement.
A way to do this is to require the user to take two pictures. One picture has the user in it; the other must be taken from the same camera position and orientation, but with the user stepping out of the frame.
Since both pictures have the same background, you can compare them pixel by pixel and flag those pixels that differ by more than some threshold. The threshold must be chosen so that camera noise isn't detected as a difference. Once you have the collection of differing pixels, you can filter them and compute an approximate silhouette of the user from the pixels on the edge.
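A minimal Java sketch of that comparison, assuming both photos are the same size and already loaded as BufferedImages; the threshold is a tunable guess against camera noise:

```java
import java.awt.image.BufferedImage;

public class BackgroundDiff {
    /** Marks pixels that differ between the two shots by more than the threshold. */
    static boolean[][] diffMask(BufferedImage withUser, BufferedImage empty, int threshold) {
        int w = withUser.getWidth(), h = withUser.getHeight();
        boolean[][] mask = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int a = withUser.getRGB(x, y);
                int b = empty.getRGB(x, y);
                // Sum of per-channel absolute differences, ignoring alpha.
                int diff = Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF))
                         + Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF))
                         + Math.abs((a & 0xFF) - (b & 0xFF));
                mask[y][x] = diff > threshold;  // true = pixel likely belongs to the user
            }
        }
        return mask;
    }
}
```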
A simplification of the above method is possible if you have control over the background: use a bluescreen, so you don't need a second picture of the background at all.