How can I add a Continuous LOD for a single object? - java

I have a single object: a heightmap.
(Ignore the flag and the water - we have imaginations, right? ;) )
However, the issue is that I display this as a single display list, so I cannot "check the distance" of the map from the player, nor make parts of the map less detailed; I am only able to treat the map as a single object.
I have tried using shaders, but these run too late in the pipeline to affect performance (if I use a shader to cut out EVERYTHING in the entire game, the game still lags as if it were drawing everything).
So, how can I add a continuous level of detail to the terrain before it is too late, without splitting it into a ton of different objects (and even that wouldn't work well)?

You can split your map up into chunks that you display independently: only create those mesh objects when the player comes close enough that they could potentially be rendered, and only render the ones inside the player's sight (a minimal sketch of this approach follows below).
Besides that, you can use a tessellation shader to create the continuous level of detail. It involves drawing flat quads, using the tessellation control shader to decide how many vertices must be drawn, and the evaluation shader to displace them upwards based on the heightmap (which you pass in as a texture).
Or, to be radical, you can create a flat mesh that is fine-grained in the center and decreases in detail further out, then use the vertex shader to displace the vertices with the heightmap. The center remains under the camera, but you use the position of the camera to offset the sampled coordinates of the heightmap (and texture map).
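Here is a minimal sketch of the chunk-based idea in plain Java. The Chunk class, the distance thresholds, and the render stub are illustrative names, not a real API; the actual drawing would go through your display-list or VBO code.

// Sketch: distance-based LOD selection for terrain chunks.
// All names (Chunk, lodFor, render) are illustrative, not a real API.
public class TerrainLodDemo {

    static class Chunk {
        final float centerX, centerZ;   // world-space center of this chunk
        Chunk(float x, float z) { centerX = x; centerZ = z; }

        // Pick a mesh resolution based on distance to the camera.
        int lodFor(float camX, float camZ) {
            float dx = centerX - camX, dz = centerZ - camZ;
            float dist = (float) Math.sqrt(dx * dx + dz * dz);
            if (dist < 64f)  return 0;  // full detail
            if (dist < 128f) return 1;  // half detail
            if (dist < 256f) return 2;  // quarter detail
            return -1;                  // too far: skip entirely
        }

        void render(int lod) {
            // Here you would bind/draw the pre-built mesh (or display list)
            // for this chunk at the chosen resolution.
            System.out.printf("chunk (%.0f, %.0f) -> LOD %d%n", centerX, centerZ, lod);
        }
    }

    public static void main(String[] args) {
        float camX = 0, camZ = 0;
        // A 4x4 grid of 64-unit chunks.
        for (int cx = 0; cx < 4; cx++) {
            for (int cz = 0; cz < 4; cz++) {
                Chunk c = new Chunk(cx * 64f + 32f, cz * 64f + 32f);
                int lod = c.lodFor(camX, camZ);
                if (lod >= 0) c.render(lod);  // cull chunks that are too far
            }
        }
    }
}

The point is that the distance check happens per chunk, so distant chunks can use coarser meshes or be skipped entirely, which a single display list cannot do.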

Related

Rendering very big 2D Map

I want to create a 2D game with Java and LWJGL. It is a retro-styled RPG, so there is a really big map (about 1000x1000 tiles or bigger). I want to do it with tiles but I don't know how to save it or how to render it.
I thought of something like a 2D array with numbers in it, where the renderer just sets the right tile at the right place.
But I think the bigger the map gets, the more it will slow down.
I hope you can help me. :)
My second idea was to make one big image and just pick a part of it (the part where the player is), but then it's hard to know where I have to do collision detection, so this is just an absurd idea.
Thank you for your suggestions!
As one of the comments mentioned, this subject is far too large to be easily covered in a single answer. But I will give you some advice from personal experience.
As far as saving the map in a 2D array goes, as long as the map is fairly simple in nature there is no problem. I have created similar (tiled) maps using 2D integer arrays to represent the map, with a drawing method that renders the map to an image I can display. I use multiple layers, so I just render each layer of the map separately. Mind you, most of my maps are 100x100 or smaller.
For such large maps I would recommend some sort of buffer. For example, render only the playable screen plus a slight offset area outside of it. E.g. if your screen is effectively 30x20 tiles, render 35x25, and just change what is rendered based on the current location. One way to do this would be to load the map in "chunks": have your code automatically break the map into 50x50 chunks, and only render a chunk if you get close enough that it might be used.
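A minimal sketch of the visible-window idea in plain Java; the map size, window size, and drawTile stub are all illustrative:

// Sketch: render only the visible window of a big tile map.
public class TileWindowDemo {
    static final int MAP_W = 1000, MAP_H = 1000;
    static int[][] map = new int[MAP_W][MAP_H];   // tile ids

    public static void main(String[] args) {
        renderWindow(500, 500);
    }

    // camTileX/camTileY: camera position in tile units.
    static void renderWindow(int camTileX, int camTileY) {
        int viewW = 30, viewH = 20;   // visible area in tiles
        int border = 2;               // slight offset area outside the screen

        int startX = Math.max(0, camTileX - viewW / 2 - border);
        int endX   = Math.min(MAP_W - 1, camTileX + viewW / 2 + border);
        int startY = Math.max(0, camTileY - viewH / 2 - border);
        int endY   = Math.min(MAP_H - 1, camTileY + viewH / 2 + border);

        for (int x = startX; x <= endX; x++) {
            for (int y = startY; y <= endY; y++) {
                drawTile(map[x][y], x, y);  // cost is bounded by the window, not the map
            }
        }
    }

    static void drawTile(int tileId, int x, int y) {
        // Real code would blit the tile image at (x, y) minus the camera offset.
    }
}

Because the loop bounds depend only on the window size, a 1000x1000 map costs no more per frame than a 100x100 one.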
I also recommend running the drawing methods in their own thread, outside of the main game logic. This way you can draw the map continuously, without random blinking or delays.
I'm maintaining my 400x400 tile map in the Tiled map editor and rendering it with the Slick2D framework, which provides support for rendering only visible subsections of the map (the TiledMap class).
I've tried both approaches - image-based and tile-based map creation - and ended up with the latter. With tiles you can not only create the view of your map but also invisible metadata layers, like collision, spawn spots, item locations, etc.
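A minimal sketch of that subsection rendering with Slick2D's TiledMap; the map path and the window size are illustrative:

// Sketch: render only a visible window of a Tiled map with Slick2D.
import org.newdawn.slick.SlickException;
import org.newdawn.slick.tiled.TiledMap;

public class MapView {
    private TiledMap map;

    public void init() throws SlickException {
        map = new TiledMap("data/world.tmx");  // hypothetical map file
    }

    // camX/camY: top-left corner of the camera in pixels.
    public void render(int camX, int camY) {
        int tileW = map.getTileWidth(), tileH = map.getTileHeight();
        int sx = camX / tileW, sy = camY / tileH;  // first visible tile
        // render(x, y, sx, sy, width, height): draws a width x height block
        // of tiles starting at tile (sx, sy), placed at pixel (x, y).
        map.render(-(camX % tileW), -(camY % tileH), sx, sy, 32, 22);
    }
}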

Confused with image scaling and positioning in libgdx

I'm having quite a bit of difficulty wrapping my head around the actual display side of things with libgdx. That is, it just seems fairly jumbled in terms of what needs to be done in order to actually put something up onto the screen. I guess my confusion can sort of be separated into two parts:
1. What exactly needs to be done in terms of creating an image? There's Texture, TextureRegion, TextureAtlas, Sprite, Batch, and probably a few other art-related assets that I'm missing. How do these all relate and tie into each other? What's the "production chain" among these, I guess, would be a way of putting it.
2. In terms of putting whatever is created from the stuff above onto the monitor or display, how do the different coordinate and sizing measures relate and translate to and from each other? Say there's some image X that I want to put on the screen. It's got its own set of dimensions and coordinates, but then there's also a viewport size (is there a viewport position?) and a camera position (is there a camera size?). On top of all that, there's also the overall display size from Gdx.graphics. A few examples of things I might want to do could be as follows:
- X is my "global map" that is bigger than my screen size. I want to be able to scroll/pan across it. What coordinates/positions should I use when displaying it?
- Y is bigger than my screen size. I want to scale it down and have it always be in the center of the screen/display. What scaling factor do I use here, and which coordinates/positions?
- Z is smaller than my screen size. I want to stick it in the upper left corner of my screen and have it "stick" to the global map I mentioned earlier. Which positioning system do I use?
Sorry if that was a bunch of stuff... I guess the tl;dr of that second part is just which set of positions/coordinates, sizes, and scales am I supposed to do everything in terms of?
I know this might be a lot to ask at once, and I also know that most of this stuff can be found online, but after sifting through tutorial after tutorial, I can't seem to get a straight answer as to how these things all relate to each other. Any help would be appreciated.
Texture is essentially the raw image data.
TextureRegion allows you to grab smaller areas from a larger texture. For example, it is common practice to pack all of the images for your game/app into a single large texture (the LibGDX “TexturePacker” is a separate program that does this) and then use regions of the larger texture for your individual graphics. This is done because switching textures is a heavy and slow operation and you want to minimize this process.
When you pack your images into a single large image with the TexturePacker it creates a “.atlas” file which stores the names and locations of your individual images. TextureAtlas allows you to load the .atlas file and then extract your original images to use in your program.
Sprite adds position and color capabilities to the texture. Notice that the Texture API has no methods for setting/getting position or color. Sprites will be your characters and other objects that you can actually move around and position on the screen.
Batch/SpriteBatch is an efficient way of drawing multiple sprites to the screen. Instead of issuing a separate draw call for each sprite, the Batch groups many sprites together and submits them in as few draw calls as possible.
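To make the "production chain" concrete, here is a minimal sketch, assuming an atlas produced by TexturePacker; the file name "game.atlas" and region name "hero" are illustrative:

// Sketch of the asset chain: TextureAtlas -> TextureRegion -> Sprite -> SpriteBatch.
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

public class AssetChain {
    TextureAtlas atlas;
    Sprite hero;
    SpriteBatch batch;

    public void create() {
        atlas = new TextureAtlas(Gdx.files.internal("game.atlas")); // packed images + .atlas file
        TextureRegion region = atlas.findRegion("hero");            // one image inside the big texture
        hero = new Sprite(region);                                  // adds position/color on top
        hero.setPosition(100, 50);
        batch = new SpriteBatch();
    }

    public void render() {
        batch.begin();        // start collecting sprites
        hero.draw(batch);     // queued by the batch
        batch.end();          // flushed to the GPU in few draw calls
    }
}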
And hopefully I'm not adding to the confusion, but another option I really like is using the "Actor" and "Stage" classes over the "Sprite" and "SpriteBatch" classes. Actor is similar to Sprite but adds additional functionality for moving/animating, via the act method. The Stage replaces the SpriteBatch, as it uses its own internal SpriteBatch, so you do not need to use a SpriteBatch explicitly.
There is also an entire set of UI components (table, button, textfield, slider, progress bar, etc) which are all based off of Actor and work with the Stage.
I can’t really help with question 2. I stick to UI-based apps, so I don’t know the best practices for working with large game worlds. But hopefully someone more knowledgeable in that area can help you with that.
This was too long to post as a comment, so I'm responding as another answer...
I think both Sprite/SpriteBatch and Actor/Stage are equally powerful as you can still animate and move with Sprite/SpriteBatch, but Actor/Stage is easier to work with. The stage has two methods called “act” and “draw” which allows the stage to update and draw every actor it contains very easily. You override the act method for each of your actors to specify what kind of action you want it to do. Look up a few different tutorials for Stage/Actor with sample code and it should become clear how to use it.
Also, I was slightly incorrect before that “Actor” is equivalent to Sprite, because Sprite includes a texture, but Actor by itself does not have any kind of graphical component. There is an extension of Actor called “Image” that includes a Drawable, so the Image class is actually the equivalent to Sprite. Actor is the base class that provides the methods for acting (or “updating”), but it doesn’t have to be graphical. I've used Actors for other purposes such as triggering audio sounds at specific times.
The atlas holds the large Texture containing all of your PNG files and then lets you get regions from it for the individual PNGs. So the pipeline for getting a specific PNG graphic would be Atlas > Region > Sprite/Image. Both the Image and Sprite classes have constructors that take a region.
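And a minimal sketch of the Stage/Actor route under the same assumptions (atlas and region names are again illustrative):

// Sketch: the Stage/Actor route. Stage owns its own SpriteBatch internally.
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.scenes.scene2d.ui.Image;

public class StageExample {
    Stage stage;

    public void create() {
        stage = new Stage();
        TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));
        Image hero = new Image(atlas.findRegion("hero")); // Image = Actor with a graphic
        hero.setPosition(100, 50);
        stage.addActor(hero);
    }

    public void render() {
        stage.act(Gdx.graphics.getDeltaTime()); // updates every actor
        stage.draw();                           // draws every actor with the internal batch
    }
}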

Trying to achieve dynamic lighting in a tiled 2D isometric environment using Java2D

I am trying to write some lighting code for a Java2D isometric game I am working on. I have found a few algorithms I want to try implementing, one of which I found here.
The problem is that this sort of algorithm would require some fast pixel-shading capability that I haven't found a way of achieving via Java2D - preferably via the graphics hardware, but if that isn't possible, at least a way of achieving the same effect quickly in software.
If that isn't possible, could someone direct me to a more suitable algorithm with Java2D in mind? I have considered per-tile lighting; however, I find the drawPolygon method isn't hardware accelerated and thus performs very slowly.
I want to avoid native dependencies or the requirement for elevated permissions in an applet.
Thanks
I did a lot of research since I posted this question - there are tons of alternatives, and JavaFX does intend (in a later release) to include its own shader language for those interested. There is also, of course, LWJGL, which will allow you to load your own shaders onto the GPU.
However, if you're stuck with Java2D (as I am), it is still possible to implement lighting in an isometric game; it is just "awkward" because you cannot perform the light shading on a per-pixel level.
How it Looks:
I have achieved a (highly unpolished - after some polishing I can assure you it will look great) effect for casting shadows, depth-sorting the light map, and applying the lighting without experiencing a drop in frame rate. Here is how it looks:
You'll see in this screenshot a diffuse light (not shaded in, but I'd say that step is relatively easy in contrast to the steps needed to get there) casting shadows. The areas behind the entities that obstruct the light's passage, but still inside the bounds of the light's maximum fall-off, are shaded in as the ambient lighting; in reality, this area is passed to the light's rendering routine to factor in the amount of obstruction that occurred, so the light can apply a prettier gradient (or fading effect of some sort).
The current implementation of the diffuse lighting simply renders obstructed regions in the ambient colour and non-obstructed regions in the light's colour - though obviously you'd apply a fading effect as you get further from the light (that part of the implementation I haven't done yet, but as I said, it is relatively easy).
How I did it:
I don't guarantee this is the most optimal method, but for those interested:
Essentially, this effect is achieved by using a lot of Java shape operations - the rendering of the light map is accelerated by using a VolatileImage.
When the light map is being generated, the render routine does the following:
1. Create an Area object containing a Rectangle that covers the entirety of the screen. This area will hold your ambient lighting.
2. Iterate through the lights, asking each what its light-casting Area would be if there were no obstructions in the way.
3. Take that area object and search the world for Actors/Tiles contained within the area the light would be cast in.
4. For every tile found that obstructs view in the light's casting area, calculate the difference between the light source's position and the obstruction's position (essentially creating a vector that points AT the obstruction from the light source - this is the direction you want to cast your shadow). This pointing vector (in world space) needs to be translated to screen space.
5. Once that is done, take a perpendicular to that vector and normalize it. This gives you a line you can travel up or down along by multiplying it by any given length. This vector is perpendicular to the direction you want to cast your shadow over.
6. Almost done: construct a polygon consisting of four points. The first two points sit at the base of the screen coordinate of your obstruction's center point. To get the first point, travel up the perpendicular vector (calculated in 5) by half your tile's height [this is a relatively accurate approximation, though I think this part of the algorithm is slightly incorrect - but it has no noticeable decay on the visual effect], then of course add the obstruction's origin. To get the second, do the same but travel down instead.
7. The remaining two points are calculated exactly the same way, only they are projected outward in the direction of the shadow-projection vector calculated in 4. You can project them out by any large amount, as long as they reach at least outside of your light's casting area (so if you just want to do it stupidly, multiply your shadow-projection vector by a factor of 10 and you should be safe).
8. From the polygon you just constructed, construct an Area and invoke its "intersect" method with your light's area as the first argument. This ensures your shadow's area doesn't reach outside the bounds of the area your light casts over.
9. Subtract the shadow area constructed above from your light's casting area. At this point you have two areas: the area where the light casts unobstructed, and the area the light casts over obstructed. If your Actors have a visibility-obstruction factor that you used to decide a particular actor was obstructing view, you also have the grade at which it obstructs the view, which you can apply later when drawing in the light effect (this lets you choose a darker/brighter shade depending on how much light is being obstructed).
10. Subtract both the light area and the obstructed light area from the ambient-light area you constructed in 1, so you don't apply the ambient light to areas the lighting effect will take over and render into. (A small sketch of the Area operations in steps 6-9 follows this list.)
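Here is a small, self-contained sketch of the Area operations in steps 6-9, assuming positions are already in screen space; the numbers and the 10x projection factor are illustrative:

// Sketch of steps 6-9: build a shadow quad from a light and an obstruction,
// then clip it to the light's area with java.awt.geom.Area operations.
import java.awt.Polygon;
import java.awt.geom.Area;
import java.awt.geom.Ellipse2D;

public class ShadowSketch {
    public static void main(String[] args) {
        double lightX = 100, lightY = 100, radius = 150;
        double obsX = 160, obsY = 120;  // obstruction center (screen space)
        double halfTile = 16;           // half the tile height (step 6)

        // Step 4: direction from the light to the obstruction.
        double dx = obsX - lightX, dy = obsY - lightY;
        double len = Math.hypot(dx, dy);
        dx /= len; dy /= len;

        // Step 5: normalized perpendicular.
        double px = -dy, py = dx;

        // Steps 6/7: four corners of the shadow quad; 10x the radius is
        // "stupidly large" on purpose so the quad always leaves the light's area.
        double proj = radius * 10;
        Polygon quad = new Polygon();
        quad.addPoint((int) (obsX + px * halfTile), (int) (obsY + py * halfTile));
        quad.addPoint((int) (obsX - px * halfTile), (int) (obsY - py * halfTile));
        quad.addPoint((int) (obsX - px * halfTile + dx * proj), (int) (obsY - py * halfTile + dy * proj));
        quad.addPoint((int) (obsX + px * halfTile + dx * proj), (int) (obsY + py * halfTile + dy * proj));

        // Step 8: clip the shadow to the light's casting area.
        Area lightArea = new Area(new Ellipse2D.Double(lightX - radius, lightY - radius,
                                                       radius * 2, radius * 2));
        Area shadow = new Area(quad);
        shadow.intersect(lightArea);

        // Step 9: the unobstructed part of the light.
        Area unobstructed = new Area(lightArea);
        unobstructed.subtract(shadow);

        System.out.println("shadow empty? " + shadow.isEmpty());
    }
}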
Now you need to merge your light map with your depth-buffered world's render routine
Now that you've rendered your light map and it is contained inside a VolatileImage, you need to throw it into your world's render routine and depth-sorting algorithm. Since the back-buffer and the light map are both VolatileImages, rendering the light map over the world is relatively fast.
You need to construct a polygon that is essentially a strip covering what a vertical strip of your world tiles would be rendered into (look at my screenshot; you'll see an array of thin diagonal lines separating these strips - these strips are what I am referring to). You can then render the light map strip by strip (render it over a strip after you've rendered the last tile in that strip, since - obviously - the light map has to be applied over the map). You can use the same image map, just using that strip as a clip for the Graphics; you will need to translate the strip polygon down for each strip you render.
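A minimal sketch of the clipped blit, assuming g is the back-buffer's Graphics2D, lightMap is the VolatileImage, and strip is the polygon for the current strip:

// Sketch: apply the light map to one strip using a clip.
import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.Polygon;
import java.awt.Shape;

public class StripBlit {
    static void blitStrip(Graphics2D g, Image lightMap, Polygon strip) {
        Shape oldClip = g.getClip();
        g.setClip(strip);                   // restrict drawing to this strip
        g.drawImage(lightMap, 0, 0, null);  // only the clipped region is touched
        g.setClip(oldClip);                 // restore for the rest of the frame
    }
}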
Anyway, like I said I don't guarantee this is the most optimal way - but so far it is working fine for me.
The light map is applied per strip, as described above.

OpenGL 2.0 ES how does a matrix stack work?

I am having some performance problems with OpenGL. I essentially want to create a grid of squares. I first implemented it so that for each square I would translate to where I wanted it, multiply the model and view matrices, pass the result into the shader program, and draw the square, doing this for every square. After creating about 50 squares the frame rate would start to drop below what I desire.
I then tried a VBO method where I basically regenerate the vertex buffer each time the squares change location. The frame rate increased dramatically with this approach, but now there is too much latency when something changes, because all the vertex locations have to be regenerated.
What I think I need is a matrix stack... I used OpenGL 1.1 before and would use push/pop. I don't really understand what that was doing, though, or how to reproduce it. Does anyone know of a good example of a matrix stack I can learn from? Or just a good explanation of one?
You can check this tutorial; it does basically the same thing you want to achieve, but with cubes instead of squares. It uses a VBO as well:
http://www.learnopengles.com/android-lesson-seven-an-introduction-to-vertex-buffer-objects-vbos/
About the matrices: in OpenGL ES 2.0 you don't have any matrix-related functions anymore, but you can use the GLM ("glmath") library, which does the same (and much more):
http://glm.g-truc.net/
It's a header library, so you just need to copy it somewhere and include it where you need it.
I'm not sure if I completely understand your objective, but I guess you could copy the data for one square to the graphics card (using a VBO) and then repeatedly update the model matrix for every square.
The concept of a matrix stack makes sense if your squares have some kind of hierarchy between them (for instance, if one of them moves, the one to its left has to move accordingly).
You can imagine it as a skeleton made out of squares. If the shoulder moves, all the pieces in the arm will move as well (hands, fingers, and so on).
You can emulate that with a matrix stack. Create some kind of tree of all your squares, so that every square has a list of "descendants" that apply the same transformation as their parent. Then you can render all the squares recursively like this:
1. Apply the transform to the root square(s).
2. Push the transform onto a stack.
3. Call the same render function for every child.
4. Every child reads the matrix on top of the stack, multiplies it by its own transformation, pushes the new matrix onto the stack, and calls its own children.
5. After that, every child pops the matrix it pushed before.
Doing this with GLM is quite easy; you just need to create a stack (a std::vector in this case) of matrices:
std::vector<glm::mat4> matrixStack; // seed it with the root/identity matrix before recursing
And then for every child:
glm::mat4 modelMatrix = matrixStack.back();        // parent's accumulated transform
glm::mat4 nodeTransform = /* this node's local transform */;
glm::mat4 combined = modelMatrix * nodeTransform;  // note: 'new' is a C++ keyword, so use another name
matrixStack.push_back(combined);
/* pass 'combined' to the shader and call glDrawArrays (or similar) to render this square */
for (/* every child */) {
    render();
}
matrixStack.pop_back();                            // restore the parent's transform
For the drawing part, I guess you could bind the vertex array with the square vertices, and then update the model matrix in the shader for every child, before calling glDrawArrays.
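Since this question is tagged Java, here is a minimal sketch of the same stack pattern using Android's android.opengl.Matrix utilities; the Node class and drawSquare stub are illustrative, not a real API:

// Sketch: a matrix stack for OpenGL ES 2.0 in Java.
import android.opengl.Matrix;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class MatrixStackSketch {
    static class Node {
        float[] localTransform = new float[16]; // this square's own transform
        List<Node> children;
        Node() { Matrix.setIdentityM(localTransform, 0); }
    }

    private final Deque<float[]> stack = new ArrayDeque<>();

    public void renderRoot(Node root) {
        float[] identity = new float[16];
        Matrix.setIdentityM(identity, 0);
        stack.push(identity);   // seed the stack
        render(root);
        stack.pop();
    }

    private void render(Node node) {
        float[] combined = new float[16];
        // combined = parent * local
        Matrix.multiplyMM(combined, 0, stack.peek(), 0, node.localTransform, 0);
        stack.push(combined);
        drawSquare(combined);   // upload to the shader and issue the draw call here
        if (node.children != null) {
            for (Node child : node.children) render(child);
        }
        stack.pop();            // restore the parent's matrix
    }

    private void drawSquare(float[] modelMatrix) {
        // e.g. glUniformMatrix4fv(uModelHandle, 1, false, modelMatrix, 0); glDrawArrays(...);
    }
}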

Drawing textured polygons with libgdx

I'm having a problem with my rendering cycle using libgdx. Basically, I need to fill an area with a square texture, and the last part of this area may be smaller than, or a different shape from, the texture, which means I need to render a quad of arbitrary form and slap the texture on it, cutting out the parts I don't need.
I'm a bit lost on how to do this. So far I've seen that PolygonRegion and PolygonSpriteBatch might do it for me, but I'm a bit wary of instantiating a new heavyweight object I'll use on only one object.
Is there any alternative? Perhaps the Mesh class, but I'd like to be certain.
I suggest using a Mesh to define exactly the region you want. Defining the vertex points and mapping them to texture coordinates is a bit fiddly, but it's good to know what's going on underneath some of the higher-level APIs (like the *Batch bits). Additionally, the *Batch APIs are designed to share the cost of uploading a single texture across multiple objects, which sounds like it might not apply in this case. (On the other hand, even if the Batch objects are a bit "heavyweight", they may not actually be a problem in practice.)
Another approach to consider is to render the object as a square mesh, but to define your texture with transparent pixels for all the pixels outside the region. (I'm assuming the non-square shape is something you can know offline, and isn't dynamic.)
It isn't a big problem if you instantiate a PolygonSpriteBatch for this purpose. The object mainly contains buffers for geometry. Of course, you will need to take care of the correct rendering order, calling flush or end when needed.
Mesh is another option, but it can be a bit more work because you need to provide the vertices and texture coordinates manually.
From a performance point of view, rendering one sprite is slightly faster with a Mesh. I'm not sure the difference would affect your FPS in this case.
EDIT: I forgot to mention - if you use a SpriteBatch to render a single object, don't use the default constructor, as it reserves memory for a large number of sprites.
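For reference, a minimal sketch of the PolygonRegion route, using libgdx's EarClippingTriangulator to triangulate an arbitrary outline; the texture path and the outline coordinates are illustrative:

// Sketch: texture an arbitrary polygon with PolygonRegion + PolygonSpriteBatch.
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.PolygonRegion;
import com.badlogic.gdx.graphics.g2d.PolygonSpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.math.EarClippingTriangulator;
import com.badlogic.gdx.utils.ShortArray;

public class PolygonFill {
    PolygonSpriteBatch batch;
    PolygonRegion region;

    public void create() {
        Texture tile = new Texture(Gdx.files.internal("tile.png"));
        // Outline of the leftover, non-square area (x0,y0, x1,y1, ...).
        float[] outline = { 0, 0, 100, 0, 100, 60, 50, 100, 0, 60 };
        ShortArray tris = new EarClippingTriangulator().computeTriangles(outline);
        region = new PolygonRegion(new TextureRegion(tile), outline, tris.toArray());
        batch = new PolygonSpriteBatch();
    }

    public void render() {
        batch.begin();
        batch.draw(region, 200, 150); // the texture is clipped to the polygon
        batch.end();
    }
}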
