I'm working on a game with a low-poly style. I have been searching for procedural terrain generation, but I only found 3D or tile-based tutorials.
INFO:
Language is Java using the libGDX framework, released on Android.
The terrain will be generated procedurally while the game is running, using a chunk loading system (for an infinite world).
The game terrain will be saved and should be reloaded with the same terrain.
The terrain can be concave (caves).
QUESTION:
Are there any good tutorials or libraries?
If I use chunks to only load parts of the map, some triangles' vertices will span two different chunks. How do I manage these?
I have read that I shouldn't save/load a chunk to a file, but just generate the terrain using a seed. How do I tell the generator not to generate something that was removed previously?
What about entities? Should I save them to a file?
A few bits of general advice I have:
Possible vertex overlap could be accounted for by defining and saving chunks with padding, to account for the maximum distance a vertex can move outside of its chunk. Minecraft, for example, never had this problem because cubes line up very nicely. You could consider changing the geometry you're using: for example, define the world as cubes, and then apply an effect that moves all vertices pseudo-randomly, hiding the fact that you're generating from cubes.
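A minimal sketch of such a deterministic jitter (plain Java; the class and method names are illustrative): derive the offset purely from the vertex's integer grid position and the world seed, so the same vertex always moves the same way on every load.

import java.util.Random;

public class VertexJitter {
    private final long worldSeed;
    private final float maxOffset; // keep below half the cell size so cells never fold over

    public VertexJitter(long worldSeed, float maxOffset) {
        this.worldSeed = worldSeed;
        this.maxOffset = maxOffset;
    }

    /** Returns the same offset for the same (x, y, z) every time. */
    public float[] offsetFor(int x, int y, int z) {
        // Mix the coordinates and seed into one value; any decent hash works here.
        long h = worldSeed;
        h = h * 31 + x;
        h = h * 31 + y;
        h = h * 31 + z;
        Random r = new Random(h);
        return new float[] {
            (r.nextFloat() * 2f - 1f) * maxOffset,
            (r.nextFloat() * 2f - 1f) * maxOffset,
            (r.nextFloat() * 2f - 1f) * maxOffset
        };
    }
}

Because the offset depends only on the vertex's world position and the seed, two chunks that share a border vertex compute the identical offset, which also avoids seams between chunks.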
I would generate all terrain using a seed instead of saving and loading from a file, except for chunks where something has been removed. Those chunks will need to be saved, and you can load them in place of the seed-generated chunks.
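A hedged sketch of that scheme, where ChunkData, readChunkFile and generateFromSeed are placeholders for your own chunk type, loader, and generator:

import java.io.File;

public class ChunkStore {
    private final File saveDir;
    private final long worldSeed;

    public ChunkStore(File saveDir, long worldSeed) {
        this.saveDir = saveDir;
        this.worldSeed = worldSeed;
    }

    public ChunkData loadChunk(int cx, int cz) {
        File file = new File(saveDir, "chunk_" + cx + "_" + cz + ".dat");
        if (file.exists()) {
            return readChunkFile(file);             // modified chunk: use the saved copy
        }
        return generateFromSeed(worldSeed, cx, cz); // untouched chunk: regenerate deterministically
    }

    private ChunkData readChunkFile(File f) { /* deserialize the chunk here */ return new ChunkData(); }
    private ChunkData generateFromSeed(long seed, int cx, int cz) { /* noise here */ return new ChunkData(); }

    public static class ChunkData { }
}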
Just like you said in your question, handle entities by saving them to a .properties file or something similar. In-game, I would keep track of them with a List or array of an abstract parent class.
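For illustration, one possible shape for that setup (Entity, Cow, and toSaveString are made-up names; the save format is up to you):

import java.util.ArrayList;
import java.util.List;

abstract class Entity {
    float x, y;
    abstract void update(float delta);
    /** Serialize to one line, e.g. for a .properties-style save file. */
    abstract String toSaveString();
}

class Cow extends Entity {
    @Override void update(float delta) { /* wander */ }
    @Override String toSaveString() { return "cow," + x + "," + y; }
}

class EntityManager {
    final List<Entity> entities = new ArrayList<>(); // holds every subtype

    void updateAll(float delta) {
        for (Entity e : entities) e.update(delta);
    }
}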
Some videos about procedural generation in general:
https://www.youtube.com/watch?v=JdYkcrW8FBg
https://www.youtube.com/watch?v=0ORcSjvESrA
This is all pretty abstract information, but I didn't want to leave this without any response. Hopefully someone with more experience in the particular area can give you more practical insight.
I am learning how to program an OpenGL game engine in Java. I've already done loading models from files, but next up is heightmaps, and I'm wondering: what are the benefits of using a heightmap to generate 3D terrain, as opposed to a list of Z-axis values or a 3D model? Does it depend on the detail of the terrain, or is it just more efficient to use a heightmap?
Some that I can think of:
Heightmap pros:
Smaller files, since the game expands the heightmap into geometry in memory at runtime.
More optimized algorithms for manipulation of an n^2 image?
Easier procedural generation.
Heightmap cons:
Steep angles are a problem, and overhangs are impossible.
They don't seem to be as accurate as models.
Are they that helpful for making terrain if you aren't procedurally generating it?
If by heightmap you mean a grayscale/RGB/RGBA image with height values, then you are mostly right.
list of Z-axis values
You can store your values however you want; you can even serialize/deserialize them. The only difference is that the image format is universal and is used in all 3D software. So you probably want to base your pipeline on the image heightmap format, since you probably won't generate heightmaps procedurally yourself and will instead import them from some kind of 3D software.
3D model
Heightmaps can only be used when the height function f(x, z) has exactly one value per position, i.e. no overhangs. In that case I believe there's no reason to use 3D meshes instead of heightmaps.
Does it depend on the detail of the terrain or is it just more efficient to use a heightmap?
No, LOD and other such features depend only on your implementation. It's just that a heightmap uses 8 bits (grayscale) / 24 bits (RGB) / 32 bits (RGBA) per vertex, while a 3D model uses 3 × 32 bits per vertex position.
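To make the 8-bit case concrete, here is a small sketch using standard Java imaging; maxHeight (the world-space height of a pure-white pixel) is an assumed parameter:

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class HeightmapLoader {
    /** Reads an 8-bit grayscale heightmap and scales each pixel to a world height. */
    public static float[][] load(File png, float maxHeight) throws IOException {
        BufferedImage map = ImageIO.read(png);
        float[][] heights = new float[map.getWidth()][map.getHeight()];
        for (int x = 0; x < map.getWidth(); x++) {
            for (int z = 0; z < map.getHeight(); z++) {
                int gray = map.getRGB(x, z) & 0xFF;       // low byte is enough for grayscale
                heights[x][z] = (gray / 255f) * maxHeight; // 0..255 -> 0..maxHeight
            }
        }
        return heights;
    }
}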
So, basically, heightmaps are used just to save some memory and import/export effort. They are probably also easier to generate, but I am not a 3D modeler, so I can't say for sure.
I am creating a pseudo-turn-based online strategy browser game where many people play in the same world over a long period (months). For this, I want a map of 64000x64000 tiles = about 4 billion tiles. I need about 6-10 bytes of data per tile, for a total of around 30 GB of data to store the map.
Each tile has properties such as type (water, grass, desert, mountain), resource (wood, cows, gold), and playerBuilt (road, building).
The client will only ever need access to about 100x100 tiles at the same time.
I have the client-side map handling under control. The problem I'm faced with is how to store, retrieve, and modify information from this map on the server side.
Required functionality:
Create, store, and modify a 64000x64000 tile map.
Show a 100x100 part of the map to the client.
Make modifications to the map, such as roads, buildings, and depleted resources.
What I have considered so far:
Procedural generation: procedurally generating whichever part of the map is needed, on the fly, making sure that given the same seed it always generates the same map. The main problem I have with this is that there will be modifications to the map during the game. Note: less than 1% of the tiles would be modified during the game, so it might be possible to store the modifications with their coordinates in a separate structure and load them on top of the procedural generation.
Databases: Generating the map at the start of the game and storing it in a database. A friend advised me against this for such a huge tile map and told me that I'd probably want to store it in memory instead.
Keeping it all in memory on the server side: Keeping it in memory in a data structure. Seems like a nice way to do it if the map was smaller but for 4 billion tiles that would be a lot to keep in memory.
I was planning on using Java + MySQL for the back end of this project. I'm still in the early phases and open to changing technology if needed.
My question is: Which of the three approaches above seem viable and/or are there other ways to do it which I have not considered?
Depends on:
how much RAM you have (on the server, and what your players/users have)
whether most of the tile map is empty (sparse) or filled (dense)
whether there is a default terrain (like empty, or water)
If sparse, use a hashmap instead of a 2D array.
If dense, it will be much more challenging, and you may need to use a database or some special data structures plus a cache.
You may detect hot zones and keep them in memory for a while; dead zones (no players there, no activity...) can be stored in the database and read on demand.
You may also load data in several passes: first just the terrain, then other objects. Each layer could be stored in a different way; for example, the terrain could be Perlin-noise generated, with another layer on top that can be modified.
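A rough sketch of that layering in Java, where Tile and generateFromNoise are placeholders for your tile type and seeded generator: the terrain itself is never stored, and only the small fraction of modified tiles lives in a hash map keyed by coordinates.

import java.util.HashMap;
import java.util.Map;

class TileWorld {
    private final long seed;
    private final Map<Long, Tile> modified = new HashMap<>(); // sparse overlay of player changes

    TileWorld(long seed) { this.seed = seed; }

    private static long key(int x, int y) {
        return ((long) x << 32) | (y & 0xFFFFFFFFL); // pack both coordinates into one key
    }

    Tile get(int x, int y) {
        Tile t = modified.get(key(x, y));
        return t != null ? t : generateFromNoise(seed, x, y); // default: deterministic generation
    }

    void set(int x, int y, Tile t) {
        modified.put(key(x, y), t); // persist this map to the database, not the whole world
    }

    private Tile generateFromNoise(long seed, int x, int y) { /* Perlin/simplex here */ return new Tile(); }
}

class Tile { /* type, resource, playerBuilt ... */ }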
I have seen a few programs and games that store their data in an indexed file and load it from that file, which they usually call a cache.
I want to be able to load my data in this way:
final int SPRITES_INDEX = 3;
List<Sprite> sprites = (List<Sprite>) cache.loadIndex(SPRITES_INDEX);
Does anyone know how it's done and why it's done this way? Is there a name for this method of storing data?
You should look up "resources" in "jar" files. That's what is commonly used for this job in the java world. Normally a jar file is just a zip file, which is sequential, but many years ago they added the ability to have indexed jar files, which provide random access to their contents.
You can begin here: http://docs.oracle.com/javase/1.5.0/docs/tooldocs/windows/jar.html
(Look for the "i" option which adds an index to the jar file.)
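A minimal example of reading such a resource through the class loader; the path "/data/sprites.dat" is made up for illustration.

import java.io.InputStream;

public class ResourceDemo {
    public static void main(String[] args) throws Exception {
        // The class loader resolves the path inside the jar on the classpath
        // (an index added with "jar i" can speed up that lookup).
        try (InputStream in = ResourceDemo.class.getResourceAsStream("/data/sprites.dat")) {
            System.out.println(in != null ? "resource found" : "resource missing");
        }
    }
}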
At least for images/rendering, this is called Texture Packing, and is done because OpenGL "binds" images before rendering them, and this binding can be expensive, processing-wise.
Packaging the textures inside a larger image allows the game/app to bind only once, and then, based on an index of predefined pixel coordinates, render only parts of the larger image, as if they were separate smaller images.
I suggest taking a look at LibGDX's TexturePacker.
Extract:
In OpenGL, a texture is bound, some drawing is done, another texture is bound, more drawing is done, etc. Binding the texture is relatively expensive, so it is ideal to store many smaller images on a larger image, bind the larger texture once, then draw portions of it many times. libGDX has a TexturePacker class, which is a command line application that packs many smaller images onto larger images. It stores the locations of the smaller images so they are easily referenced by name in your application using the TextureAtlas class. TexturePacker uses multiple packing algorithms, but the most important is based on the maximal rectangles algorithm. It also uses brute force, packing with numerous heuristics at various sizes and then choosing the most efficient result.
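For reference, typical use of the resulting atlas in libGDX looks like this (the file and region names are examples):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

public class AtlasDemo extends ApplicationAdapter {
    private SpriteBatch batch;
    private TextureAtlas atlas;
    private TextureRegion player;

    @Override public void create() {
        batch = new SpriteBatch();
        atlas = new TextureAtlas(Gdx.files.internal("sprites.atlas"));
        player = atlas.findRegion("player"); // looked up by the original small image's name
    }

    @Override public void render() {
        batch.begin();
        batch.draw(player, 100, 100); // the atlas texture is bound once for all regions
        batch.end();
    }

    @Override public void dispose() {
        atlas.dispose();
        batch.dispose();
    }
}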
Note that this is a type of, and similar but not identical to, the general concept of Caching.
In computer programming, caching consists of dedicating a section of memory to storing recently or frequently used data, to avoid having to recreate/reprocess that data every time it is needed/accessed.
As such, it's similar, but not the same as, the concept of texture packing, which is done not to avoid recreating/reprocessing the images themselves, but rather to avoid an expensive operation (the bind) further down the line.
Considering the gaming context of the question, it's also important to note another concept, this time much closer to caching: pooling. It consists of creating a cache (in this case called a pool) of pre-created/pre-processed instances of objects that can be expected to be needed in varying quantities over time, for example the units in an RTS game. This avoids having to create them at the moment they are needed, which in turn avoids sudden spikes in processing that lead to drops in FPS ("stutters").
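In libGDX this is directly supported by the Pool class; a small sketch with a made-up Bullet class:

import com.badlogic.gdx.utils.Pool;

public class BulletPoolDemo {
    static class Bullet implements Pool.Poolable {
        float x, y;
        @Override public void reset() { x = 0; y = 0; } // called by the pool when freed
    }

    private final Pool<Bullet> bulletPool = new Pool<Bullet>() {
        @Override protected Bullet newObject() { return new Bullet(); } // only runs when the pool is empty
    };

    void fire() {
        Bullet b = bulletPool.obtain(); // reuses a previously freed instance when possible
        // ... position it, add it to the active list ...
        bulletPool.free(b);             // return it instead of letting it become garbage
    }
}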
I am creating my own ray-tracer for fun and learning. One of the features I want to add is the ability to use SVG files as textures directly.
The simple and straightforward way to do this would be to render the SVG to another, more "lookup-friendly" raster format first and feed that as a regular texture to be used during ray tracing. However, I don't want to do that.
Instead, I want to actually "trace" the SVG itself directly. So I would like to know: are there any SVG libraries for Java with an API that would lend itself to being used in this manner? It would need some call that takes a float point2D[] as input and returns a float colorRGBA[] as output.
If not what would be the best approach to do this?
I don't know much about Java libraries, but most likely they will not suit you well. The main reasons are:
Most libraries are meant to render pictures and are unsuitable for random lookup.
More importantly, SVG texture data does not filter naturally all that well. We know how to build good mipmaps of raster images, and filtering them is easy, which reduces the pressure on your ray tracer's supersampling.
Then there is the complexity of SVG itself: something like SVG filters (blur) would be prohibitively expensive to evaluate in a random-sampling context.
Now, let's sidestep point 3, which is indeed quite a hard problem, as it really requires you to do rasterization or something else out of the ordinary. That leaves a few algorithmic options:
You can actually ray-trace the SVG in 2D. This would probably work out well for you, as you're writing a ray tracer anyway. All you need to do is shoot rays inside the 2D model and see whether your sample point is inside a shape or not: shoot a ray in an arbitrary direction and count intersections. Your ray will intersect the shape's outline an odd number of times if, and only if, the point is inside the shape.
Image 1: Intersection testing. Glancing hits must be excluded (most tracers consider a glancing hit a miss anyway for this reason, even in 3D).
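A compact version of that test in Java (the standard even-odd/crossing-number algorithm, here for a polygon approximation of the SVG path):

public class EvenOddTest {
    /** Even-odd rule: cast a horizontal ray from (px, py) towards +x and count crossings. */
    public static boolean inside(float px, float py, float[] xs, float[] ys) {
        boolean in = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            boolean straddles = (ys[i] > py) != (ys[j] > py); // edge crosses the ray's y level
            if (straddles) {
                // x coordinate where this edge meets the horizontal ray
                float xAtY = xs[j] + (py - ys[j]) * (xs[i] - xs[j]) / (ys[i] - ys[j]);
                if (px < xAtY) in = !in; // an odd number of crossings means "inside"
            }
        }
        return in;
    }
}

The strict/non-strict comparison pair makes an endpoint lying exactly on the ray count consistently, which is one simple way of excluding the glancing hits mentioned above.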
Pairing this tracing with a BSP tree or a quadtree should make it sufficiently performant. All you need is to implement shader support similar to your standard ray tracer's, and you can handle alpha and gradients, plus some of the filters like noise. But still no luck with blurs, short of a lot of sampling.
You can also use a raster texture as a precomputed result for a mipmap, and only ask a standard library (with a limited window size) to render a small view box when you reach a mipmap level that does not exist yet. This would naturally work better for you, and by caching the results you can reduce the number of render calls; without the caching it might be too expensive to use. But you can try, if the library supports clipping your SVG. This may not be as easy as it sounds.
You can use your 3D ray tracer for this: shoot rays at the SVG head-on. All you need to do is triangulate the SVG and reuse your normal tracing logic. How to describe Bézier curves as triangles is covered in an NVIDIA publication, so your changes might be minimal.
Hope this helps, even if it's not a "use this library" answer. There is a reason why you do not see this implemented very often.
I am creating a voxel engine. I have chunk generation working, along with some simple simplex-noise integration, but it is extremely laggy because every face of every cube is drawn, even the ones you can't see.
To my understanding, this is commonly dealt with using ray casting, of which I understand the basic theory: you shoot several rays from the camera and check for collisions; if no collision is found, the face is not within view and therefore should not be rendered. Even though I understand the theory, I haven't been able to implement it yet, due to my lack of prior knowledge and because what I found on the internet was lacking, i.e. it gives the code but not the knowledge.
The steps I could imagine I need to take are as follows:
Learn OpenCL (I haven't used it before; to my understanding it lets you make better use of your graphics card through "kernels", which I mentally associate with OpenGL "shaders").
Learn the theory and math behind ray casting. I have also heard of ray tracing, which I believe has a different use.
Learn how to use this information to avoid rendering hidden faces. Assuming I get a working implementation, how would I go about telling OpenGL not to render the hidden faces? A cube is one object, and to the best of my knowledge there is no way to manipulate the faces of an object in OpenGL, only the vertices. Also, how would OpenCL communicate with OpenGL? OpenCL isn't a graphics API, so it isn't capable of drawing the rays.
Could anyone point me in the right direction? I also believe that there are pure OpenGL implementations as well but I would like to keep the OpenCL aspect as this is a learning experience.
I wouldn't recommend working with OpenCL alongside OpenGL while developing your first game; the combination will slow you down considerably because each requires a different mindset.
Well done though on getting as far as you have.
You mentioned that you are currently rendering all quads all the time, and you want to remove the hidden ones. I have written a voxel engine for practice too, ran into this issue, and spent a lot of time thinking about how to fix it. My solution was to not draw faces that are facing another voxel.
Imagine two voxels next to each other: the two faces that are touching can't be seen and don't need to be rendered.
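A sketch of that check, where solid(x, y, z) stands in for your own voxel lookup (it should return false outside the loaded world):

public class FaceCulling {
    boolean faceVisible(int x, int y, int z, int dx, int dy, int dz) {
        // (dx, dy, dz) is the outward normal of the face being tested.
        return !solid(x + dx, y + dy, z + dz); // hidden if a solid voxel touches it
    }

    void meshVoxel(int x, int y, int z) {
        if (faceVisible(x, y, z, 0, 1, 0))  addQuadTop(x, y, z);
        if (faceVisible(x, y, z, 0, -1, 0)) addQuadBottom(x, y, z);
        // ... same for the four side faces ...
    }

    boolean solid(int x, int y, int z) { return false; } // stub: query your voxel data here
    void addQuadTop(int x, int y, int z) { }              // stub: emit geometry
    void addQuadBottom(int x, int y, int z) { }           // stub: emit geometry
}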
However, this will not make any difference if your method of talking to the GPU is the bottleneck. You will have to use buffered methods; I used display lists, but it is also possible (though harder) to use VBOs.
I'd also recommend grouping large numbers of voxels into chunks for many reasons. Then you only need to recalculate the visible quads on the chunk that changed.
Regarding ray casting: if you adopt the chunk system I just described, calculating which entire chunks are visible becomes easier. E.g. chunks behind the player don't need to be rendered, and that can be determined with just one dot-product calculation per chunk.
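A sketch of that one-dot-product test, using libGDX's Vector3 for brevity; chunkRadius is a hypothetical bounding radius so chunks straddling the camera plane are not culled:

import com.badlogic.gdx.math.Vector3;

public class ChunkCulling {
    /** camForward must be the normalized view direction. */
    boolean possiblyVisible(Vector3 chunkCenter, Vector3 camPos, Vector3 camForward, float chunkRadius) {
        Vector3 toChunk = new Vector3(chunkCenter).sub(camPos);
        return toChunk.dot(camForward) > -chunkRadius; // fully behind the camera (with margin) -> skip
    }
}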
Learn OpenCL (though I haven't used it before to my understanding it allows you to better make use of your graphics card by the use of 'kernels' which I mentally associate with OpenGL 'shaders').
The AMD APP SDK has many examples/samples, from sorting numbers to doing 3D fluid calculations on a teapot. You can also use the CPU with OpenCL, and multiple CPUs can be seen as a single device. Nvidia, JOCL, and LWJGL also have samples waiting to be reverse-engineered.
Learn the theory and math behind ray casting. I have also heard of ray tracing which I believe has a different use.
I only know that ray casting becomes ray tracing when the cast rays spawn new rays. Expect lots of vector algebra: cross products, dot products, normalization of direction vectors, 3x3 and 4x4 matrix multiplications, and more. Deep recursion is bad for a GPU; try iterative versions instead.
Learn how to use this information to not render hidden faces.
You can sort the distances of the surface primitives that a ray intersects and take the one with the smallest distance. The others shouldn't be visible unless there is refraction at that surface. Using an acceleration structure (bounding volume hierarchy, ...) helps.
The cube is one object and to the best of my knowledge there is no way to manipulate the faces of an object in OpenGL only the vertices.
Generate the geometry in OpenCL and pass it to OpenGL; this is faster than immediate mode.
Also how would OpenCL communicate with OpenGL? OpenCL isn't a graphics api so it isn't capable of drawing the rays.
Create the context with "sharing" properties to be able to use GL-CL "interop". This lets OpenCL-OpenGL communication run at VRAM speed (around 300 GB/s on high-end cards). Then use GL buffers as CL buffers in this context, with proper synchronization between CL and GL (glFinish(), compute, clFinish(), drawArrays()).
Without interop, communication will be as slow as PCI-e bandwidth, and generating on the CPU may then be faster when the compute-to-data ratio is low.
If there are multiple GPUs to play with, you should pack your data as tightly as possible. Check endianness and structure alignment, and don't forget to define OpenCL (device-side) structs for any host-side structs; they must be 1:1 compatible.