I am working on a game scene with multiple objects that need multiple materials. I extensively searched online, but I could not find any satisfactory solution.
My scene will have a river flowing by, and that material will require a separate shader anyway (it will combine many specular and normal maps into something that looks like a river). Then there is terrain that blends two textures (grass and sand), requiring another shader. There is also a player with hands, armor, and so on.
EDIT: Essentially, I want to find the most efficient way to build the most flexible multi-material/multi-shader implementation.
Briefly: there are a lot of complex objects around, requiring varied shaders. They are not many in number, but there is a lot of complexity.
So calling glUseProgram() many times doesn't seem like the brightest idea. Also, much of the shader code could be made universal, like point-light calculation. Making a generic shader and using ifs and state uniforms could possibly work, while still requiring different shaders for the river and similarly divergent materials.
I basically don't understand how to organize and implement such a generic system. I have used engines like Unreal, and also Blender, which use node-based materials and allow every single material to be customized without much lag. How would such a system translate into base GPU code?
If you really are facing timing problems because of too many glUseProgram() calls, you might want to have a look at shader subroutines and use fewer but bigger programs. Before that, sort your draw calls to change state only when needed (sort per shader, then per material, for example). I guess this is always good practice anyway.
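To illustrate the sorting idea, here is a minimal Java sketch (the DrawItem fields and the commented-out GL calls are hypothetical placeholders, not from any particular engine): draw calls are ordered by shader program, then by material, so glUseProgram() and material uploads only happen when something actually changes.

    import java.util.Comparator;
    import java.util.List;

    // Minimal sketch: order draw calls by shader program, then by material,
    // so glUseProgram() and material uniform uploads happen only on changes.
    final class DrawItem {
        int shaderProgram; // GL program object id
        int materialId;    // your own material handle
    }

    final class RenderQueue {
        void render(List<DrawItem> items) {
            items.sort(Comparator.comparingInt((DrawItem d) -> d.shaderProgram)
                                 .thenComparingInt(d -> d.materialId));
            int boundProgram = -1;
            int boundMaterial = -1;
            for (DrawItem item : items) {
                if (item.shaderProgram != boundProgram) {
                    // glUseProgram(item.shaderProgram);
                    boundProgram = item.shaderProgram;
                }
                if (item.materialId != boundMaterial) {
                    // upload material uniforms / bind textures here
                    boundMaterial = item.materialId;
                }
                // issue the actual draw call for this item here
            }
        }
    }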
Honestly, I do not think your timing problems come from the use of too many programs. You might, for example, want to use frustum culling (to avoid sending geometry to the GPU that will be culled) and early z-culling (to avoid complex lighting computations for fragments that will be overridden). You can also use levels of detail for complex geometry that is far away and thus does not need as much detail.
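If you go the frustum-culling route, a rough sketch of the bounding-sphere-versus-plane test could look like this (the Plane and Sphere classes are made up for the example; extracting the six planes from your view-projection matrix is not shown, and plane normals are assumed normalized and pointing inward):

    // A sphere is outside the frustum if it lies entirely behind any one plane.
    final class Plane {
        float nx, ny, nz, d; // plane equation: n . p + d = 0
        float distance(float x, float y, float z) {
            return nx * x + ny * y + nz * z + d;
        }
    }

    final class Sphere {
        float x, y, z, radius;
    }

    final class FrustumCuller {
        static boolean isVisible(Plane[] frustum, Sphere s) {
            for (Plane p : frustum) {
                if (p.distance(s.x, s.y, s.z) < -s.radius) {
                    return false; // completely behind this plane: cull it
                }
            }
            return true; // inside or intersecting the frustum
        }
    }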
How would I go about modeling a turgid bag of fluid in Box2D, i.e. what physics equations would be useful in modeling this? The bag of water could move when touched, but that is the only interaction. Any pointers to equations or models would be much appreciated!
You could use a library for this. Google's LiquidFun is quite good http://google.github.io/liquidfun/
User99345's answer is the way I'd go to model a bag of liquid, but if you want to use an unmodified Box2D library you can model it using instances of b2EdgeShape, b2RevoluteJoint, and b2CircleShape. Whether that's a good enough way to model it, you'll have to decide.
Additional basis/insight for this...
After seeing your question I put together a model for a bag of liquid as a Testbed demo in my dev branch of my fork of Box2D. The demo is called "Bag of Disks" because that's basically what the demo models and it's in the file BagOfDisks.hpp. The code uses a series of edge shapes connected by revolute joints to model a deformable container and fills it with circle shapes (called DiskShape in my fork) to model a liquid. If you build the library and Testbed of my fork you can see for yourself how the model looks.
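To give a rough idea of the construction in plain Java, here is a sketch along the same lines using JBox2D (the Java port, where the classes are EdgeShape, RevoluteJointDef/RevoluteJoint, and CircleShape). It is only an approximation of the approach described above, not the actual demo code; all sizes and counts are arbitrary, and you would probably want to pin the bag's end links to static bodies.

    import org.jbox2d.collision.shapes.CircleShape;
    import org.jbox2d.collision.shapes.EdgeShape;
    import org.jbox2d.common.Vec2;
    import org.jbox2d.dynamics.Body;
    import org.jbox2d.dynamics.BodyDef;
    import org.jbox2d.dynamics.BodyType;
    import org.jbox2d.dynamics.World;
    import org.jbox2d.dynamics.joints.RevoluteJointDef;

    // A chain of short dynamic edge segments joined by revolute joints forms a
    // deformable "bag"; small disks dropped inside stand in for the liquid.
    public final class BagOfDisksSketch {
        public static void build(World world) {
            int segments = 24;
            float bagRadius = 2.0f;
            Body previousLink = null;
            Vec2 previousPoint = null;
            for (int i = 0; i <= segments; i++) {
                // Points along the lower half of a circle centered at (0, 4).
                double angle = Math.PI + Math.PI * i / segments;
                Vec2 point = new Vec2((float) (bagRadius * Math.cos(angle)),
                                      (float) (4.0 + bagRadius * Math.sin(angle)));
                if (previousPoint != null) {
                    Vec2 mid = previousPoint.add(point).mul(0.5f);
                    BodyDef linkDef = new BodyDef();
                    linkDef.type = BodyType.DYNAMIC;
                    linkDef.position.set(mid);
                    Body link = world.createBody(linkDef);
                    EdgeShape edge = new EdgeShape();
                    edge.set(previousPoint.sub(mid), point.sub(mid)); // local coords
                    link.createFixture(edge, 1.0f);
                    if (previousLink != null) {
                        RevoluteJointDef hinge = new RevoluteJointDef();
                        hinge.initialize(previousLink, link, previousPoint);
                        world.createJoint(hinge);
                    }
                    previousLink = link;
                }
                previousPoint = point;
            }
            // Fill the bag with small disks to approximate the liquid.
            for (int i = 0; i < 60; i++) {
                BodyDef dropDef = new BodyDef();
                dropDef.type = BodyType.DYNAMIC;
                dropDef.position.set(-0.9f + 0.2f * (i % 10), 4.2f + 0.2f * (i / 10));
                Body drop = world.createBody(dropDef);
                CircleShape disk = new CircleShape();
                disk.m_radius = 0.09f;
                drop.createFixture(disk, 1.0f);
            }
        }
    }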
As a model of a bag of liquid, I'm of the opinion that the code I whipped together has shortcomings like the following:
It must be less computationally efficient than using a particle simulation the way Google's LiquidFun does.
I don't believe joints, as currently implemented in Box2D (or my fork of it), can always ensure containment. That's because I don't believe joints are able to 100% prevent unwanted relative movement of the connected shapes.
The "bag" in the demo doesn't level evenly at it's top like I'd expect a normal bag of liquid to do.
The "bag" doesn't model a buoyancy force that I'd expect the top of a bag to do.
I imagine things can be done from a user level to improve the model's behavior but I suspect less can be done from a user level to improve the model's speed. By user level, I mean as a user of the Box2D library.
While my fork has made many changes to the Box2D library (especially in naming), I believe what I've done in the demo itself can be reproduced pretty closely using the original Box2D library and syntax, as I said in my first paragraph, using instances of b2EdgeShape, b2RevoluteJoint, and b2CircleShape.
As to what physics equations would be useful in modeling this, beyond the equations that Box2D already uses, I'm sorry to say that I have no idea at the moment. I am interested in that, however, and am looking into it as well. Physics equations for this are available, of course, but the closest related work that I'm aware of from a Box2D user level is what iforce2d put together on his Buoyancy web page.
Hope this answer contributes helpfully to what's already been said.
I am creating my own ray-tracer for fun and learning. One of the features I want to add is the ability to use SVG files as textures directly.
The simple and straightforward way to do this would be to render the SVG to another, more "lookup-friendly" raster format first and feed that as a regular texture to be used during ray tracing. However, I don't want to do that.
Instead, I want to actually "trace" the SVG itself directly. So I would like to know: are there any SVG libraries for Java with an API that would lend itself to being used in this manner? It would need some call that takes a float point2D[] as input and returns a float colorRGBA[] as output.
If not, what would be the best approach?
I don't know much about Java libraries, but most likely they will not suit you too well. The main reasons are:
Most libraries are meant to render pictures and are unsuitable for random lookup.
More importantly, SVG texture data does not filter naturally all that well. We know how to build good mipmaps of raster images, and filtering them is easy, which reduces the pressure on your ray tracer's supersampling.
Then there is the complexity of SVG itself: something like SVG filters (blur) will be prohibitively expensive to calculate in a random-sampling context.
Now, if we sidestep point three (3), which is indeed quite a hard problem as it really requires you to do rasterization or something else out of the ordinary, there are algorithmic options:
You can actually ray trace the SVG in 2D. This would probably work out well for you, as you're writing a ray tracer anyway. All you need to do is shoot rays inside the 2D model and see whether your sample point is inside the shape or not: shoot a ray in an arbitrary direction and count intersections. Simply put, your ray will intersect the shape's outline an odd number of times if you're inside the shape (a small sketch of this test follows after this option).
Image 1: Intersection testing (originally posted here). Glancing hits must be excluded (most tracers consider those a miss anyway, even in 3D, for this very reason).
Pairing this tracing with a BSP tree or a quadtree should make this sufficiently performant. All you need is to implement shader support similar to your standard ray tracer's, and you can handle alpha and gradients plus some of the filters like noise. But still no luck with blurs without a lot of sampling.
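To make the even-odd idea from this option concrete, here is a small Java sketch that tests whether a 2D point is inside a closed polygon (i.e. an SVG path flattened to line segments); it fires a horizontal ray from the point and counts edge crossings, which is the standard crossing-number test:

    // Even-odd (crossing number) test: shoot a ray in the +x direction from the
    // point and count how many polygon edges it crosses; an odd count means the
    // point is inside. Curves are assumed to be flattened to a polygon first.
    final class EvenOddTest {
        static boolean contains(float[] xs, float[] ys, float px, float py) {
            boolean inside = false;
            int n = xs.length;
            for (int i = 0, j = n - 1; i < n; j = i++) {
                boolean crossesRayLine = (ys[i] > py) != (ys[j] > py);
                if (crossesRayLine) {
                    // x coordinate where this edge crosses the line y = py
                    float crossX = xs[j] + (py - ys[j]) * (xs[i] - xs[j]) / (ys[i] - ys[j]);
                    if (px < crossX) {
                        inside = !inside; // crossing to the right of the point
                    }
                }
            }
            return inside;
        }
    }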
You can also use a texture as a precomputed result for a mipmap, and only ask for a render of a small view box, using a standard library with a limited window size, when you reach a mipmap level that does not exist yet. This would naturally work better for you, and by caching the data you can reduce the number of calls. Without the caching it might be too expensive to use. But you can try (if the library supports clipping your SVG). This may not be as easy as it sounds.
You can use your 3D ray tracer for this, so instead you shoot the rays head-on. All you need to do is implement set logic for tracing; you can then triangulate the SVG and use your normal tracing logic to do this. How to describe Bézier curves as triangles is described in an NVIDIA publication. So your changes might be minimal.
Hope this helps, even if it's not a "use this library" answer. There is a reason why you do not see this implemented very often.
I am creating a voxel engine. I have created chunk generation, in addition to some simple simplex-noise integration, but it is extremely laggy because every face of every quad is being drawn, even the ones you can't see.
To my understanding, this is commonly dealt with using ray casting, of which I understand the basic theory: you cast several rays from the camera and check for collisions; if a face is never hit, it is not within view and therefore should not be rendered. Even though I understand the theory of it all, I haven't yet been able to implement it due to a lack of prior knowledge, and what I found on the internet was lacking, i.e. it gives the code but not the knowledge.
The steps I could imagine I need to take are as follows:
Learn OpenCL (I haven't used it before, but to my understanding it allows you to make better use of your graphics card through 'kernels', which I mentally associate with OpenGL 'shaders').
Learn the theory and math behind ray casting. I have also heard of ray tracing, which I believe has a different use.
Learn how to use this information to not render hidden faces. Assuming I get a working implementation, how would I go about telling OpenGL not to render the hidden faces? The cube is one object, and to the best of my knowledge there is no way to manipulate the faces of an object in OpenGL, only the vertices. Also, how would OpenCL communicate with OpenGL? OpenCL isn't a graphics API, so it isn't capable of drawing the rays.
Could anyone point me in the right direction? I also believe that there are pure OpenGL implementations as well but I would like to keep the OpenCL aspect as this is a learning experience.
I wouldn't recommend working with OpenCL or OpenGL while developing your first game; both will slow you down considerably because each requires a different mindset.
Well done though on getting as far as you have.
You mentioned that you are currently rendering all quads all the time and want to remove the hidden ones. I have written a voxel engine for practice too, ran into this issue, and spent a lot of time thinking about how to fix it. My solution was to not draw faces that are facing another voxel.
Imagine two voxels next to each other: the two faces that are touching can't be seen and don't need to be rendered.
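A minimal sketch of that neighbor test, assuming a chunk stores its blocks as a 3D boolean array and emitFace() stands in for whatever appends a quad to your mesh data:

    // Only emit a face when the neighboring cell in that direction is empty
    // (or outside the chunk). The storage layout and emitFace() are placeholders.
    final class ChunkMesher {
        static final int SIZE = 16;
        // Offsets for the six face directions: -x, +x, -y, +y, -z, +z.
        static final int[][] DIRS = {
            {-1, 0, 0}, {1, 0, 0}, {0, -1, 0}, {0, 1, 0}, {0, 0, -1}, {0, 0, 1}
        };

        static void buildMesh(boolean[][][] solid) {
            for (int x = 0; x < SIZE; x++)
                for (int y = 0; y < SIZE; y++)
                    for (int z = 0; z < SIZE; z++) {
                        if (!solid[x][y][z]) continue;
                        for (int d = 0; d < 6; d++) {
                            int nx = x + DIRS[d][0], ny = y + DIRS[d][1], nz = z + DIRS[d][2];
                            boolean neighborSolid = nx >= 0 && nx < SIZE
                                    && ny >= 0 && ny < SIZE
                                    && nz >= 0 && nz < SIZE
                                    && solid[nx][ny][nz];
                            if (!neighborSolid) {
                                emitFace(x, y, z, d); // this quad can be seen
                            }
                        }
                    }
        }

        static void emitFace(int x, int y, int z, int direction) {
            // append the quad's vertices to your VBO / display list data here
        }
    }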
However, this will not make any difference if your method of talking to the GPU is the bottleneck. You will have to use buffered methods; I used display lists, but it is also possible (though harder) to use VBOs.
I'd also recommend grouping large numbers of voxels into chunks for many reasons. Then you only need to recalculate the visible quads on the chunk that changed.
Regarding ray casting: if you adopt the chunk system I just described, calculating which entire chunks are visible becomes easier. For example, chunks behind the player don't need to be rendered, and that can be calculated with just one dot product per chunk.
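That per-chunk check might look something like this (camera position, camera forward vector, and chunk centers are assumed to be available as plain float arrays; the chunk's bounding radius keeps chunks the camera is standing inside from being culled):

    // A chunk whose center lies behind the camera can be skipped entirely:
    // a negative dot product between (chunkCenter - cameraPos) and the camera's
    // forward direction means "behind".
    final class ChunkCulling {
        static boolean isBehindCamera(float[] camPos, float[] camForward,
                                      float[] chunkCenter, float chunkRadius) {
            float dx = chunkCenter[0] - camPos[0];
            float dy = chunkCenter[1] - camPos[1];
            float dz = chunkCenter[2] - camPos[2];
            float dot = dx * camForward[0] + dy * camForward[1] + dz * camForward[2];
            return dot < -chunkRadius;
        }
    }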
Learn OpenCL (I haven't used it before, but to my understanding it allows you to make better use of your graphics card through 'kernels', which I mentally associate with OpenGL 'shaders').
The AMD APP SDK has many examples/samples, from sorting numbers to doing 3D fluid calculations on a teapot. You can also use the CPU with OpenCL, though multiple CPUs may be seen as a single device. Nvidia, JOCL, and LWJGL also have samples waiting to be reverse-engineered.
Learn the theory and math behind ray casting. I have also heard of ray tracing, which I believe has a different use.
I only know that ray casting becomes ray tracing when those rays cast new rays. Expect lots of vector algebra: cross products, dot products, normalization of direction vectors, 3x3 and 4x4 matrix multiplications, and many more. Higher-order recursion is bad for the GPU; try iterative versions.
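As a small taste of that math, here is a standard iterative "slab" test for a ray against an axis-aligned box (i.e. a voxel) in Java; the box is given by its min and max corners, and division by a zero direction component simply produces infinities that the min/max logic tolerates:

    // Slab method: for each axis, compute the parametric interval where the ray
    // lies between the box's two planes, then intersect the three intervals.
    final class RayAabb {
        static boolean intersects(float[] origin, float[] dir,
                                  float[] boxMin, float[] boxMax) {
            float tMin = Float.NEGATIVE_INFINITY;
            float tMax = Float.POSITIVE_INFINITY;
            for (int axis = 0; axis < 3; axis++) {
                float t1 = (boxMin[axis] - origin[axis]) / dir[axis];
                float t2 = (boxMax[axis] - origin[axis]) / dir[axis];
                tMin = Math.max(tMin, Math.min(t1, t2));
                tMax = Math.min(tMax, Math.max(t1, t2));
            }
            return tMax >= Math.max(tMin, 0.0f); // hit at or in front of the origin
        }
    }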
Learn how to use this information to not render hidden faces.
You can sort the distances of the surface primitives that a ray intersects and take the one with the smallest distance. The others shouldn't be visible if there is no refraction on that surface. Using an acceleration structure (a bounding volume hierarchy, for example) helps.
The cube is one object, and to the best of my knowledge there is no way to manipulate the faces of an object in OpenGL, only the vertices.
Generate the geometry in OpenCL and pass it to OpenGL; that is faster than immediate mode.
Also, how would OpenCL communicate with OpenGL? OpenCL isn't a graphics API, so it isn't capable of drawing the rays.
Create the context with "sharing" properties to be able to use GL-CL "interop". This lets OpenCL-OpenGL communication run as fast as GPU VRAM (300 GB/s for high-end hardware). Then use GL buffers as CL buffers in this context, with proper synchronization between CL and GL (glFinish(), compute(), clFinish(), drawArrays()).
Without interop, communication will be as slow as PCI-e bandwidth. In that case, generating the data on the CPU becomes faster if the compute-to-data ratio is low.
If there are multiple GPUs to play with, you should pack your data as tightly as possible. Check endianness and the alignment of structures. Don't forget to define OpenCL (device-side) structures for any that exist on the host side; they must be 1:1 compatible.
I'm creating a game along the lines of Minicraft. I posted a question about how I should make terrain similar to the one in the game here, and a user by the name of Quirliom posted an answer referring to what are called cellular automata.
I had absolutely no clue what they were, let alone how to use them. I did look them up and saw what they were, but I have yet to find out how to implement them. Could somebody please explain how they work and how to use them, perhaps with a link or two or even some source code/examples?
For the theory, check out http://en.wikipedia.org/wiki/Book:Cellular_Automata. Once you have a sense of what cellular automata are in general, the next step is finding sources on their application to landscape generation (a pretty non-standard but not unheard-of use); I suspect the initial theory read-through will give you a pretty good sense of implementation techniques.
Formally, cellular automata are a subclass of dynamical systems in which space and time are discrete.
Depending on the model considered, some properties may or may not apply:
The components of the model are connected by a regular graph that is invariant under translation, rotation, etc.
Given a state space S, the update rule is a function F : S^n -> S, where S^n is given by the neighborhood of a cell.
The update rule is the same for all components.
The update rule applies to all cells simultaneously, building state t+1 from state t.
Generally, cellular automata are good models for simulating a dynamic environment (sand, Brownian motion, wildfires) because their extreme simplicity allows large sizes and high computation speed.
If you want an entry point into the world of cellular automata, I recommend you look up Conway's Game of Life, find a tutorial, and implement it.
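To show how small the update rule really is, here is one synchronous Game of Life step in Java (the grid wraps at the edges, which is one common choice; the rule is birth with exactly 3 live neighbors, survival with 2 or 3):

    // One Game of Life step: count each cell's 8 neighbors in the current grid
    // and write the next state into a separate array, so all cells update
    // "simultaneously" as the cellular-automaton definition above requires.
    final class GameOfLife {
        static boolean[][] step(boolean[][] grid) {
            int h = grid.length, w = grid[0].length;
            boolean[][] next = new boolean[h][w];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int neighbors = 0;
                    for (int dy = -1; dy <= 1; dy++) {
                        for (int dx = -1; dx <= 1; dx++) {
                            if (dx == 0 && dy == 0) continue;
                            int ny = (y + dy + h) % h; // wrap around the edges
                            int nx = (x + dx + w) % w;
                            if (grid[ny][nx]) neighbors++;
                        }
                    }
                    next[y][x] = grid[y][x] ? (neighbors == 2 || neighbors == 3)
                                            : (neighbors == 3);
                }
            }
            return next;
        }
    }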
I know that OpenGL works wonders if you send it textures that are static and rarely change, like tiles, for example. But not when you have constantly changing sprites?
Is it possible to create games like abduction purely with Canvas, and what would the performance be like?
It is possible to create games like abduction using Canvas; however, you are eventually going to hit a stumbling block in terms of performance.
OpenGL, whether the images are moving or static, will handle them vastly faster by using buffers and the pixel processors on the graphics card, which are capable of manipulating large arrays of pixels at once.
However, OpenGL isn't easy; it will take time to learn, and you will need to learn its language. That said, you will find tons of information on using OpenGL. I highly recommend the Lightweight Java Game Library (LWJGL) http://lwjgl.org/ and the NeHe tutorials http://nehe.gamedev.net/.
Anyway, take a look and see what you think. It'll be hard, but as with all hard work, it'll pay off eventually.
Hope this helps.