Best way to render a non-cubical sandbox game? - java

In my game, the world is made of cubes, but the cubes are divided into 5 parts: a tetrahedron and 4 corners. Each type of block has two colors. This is what a block might look like if one corner was cut, although each corner/face may have different colors from the rest.
The problem is, on the tetrahedral faces, I want the edges between the triangles to be seamless. So I can't use textures. (I could, but they would need to be high-res, and if I want to animate the colors (for example on water) this is not an option).
I've found these approaches:
Drawing each triangle on each tetrahedral face, then each square on each cubical face (using a VBO and all that stuff)
Too many polys! Lag ensues. And this was only rendering the tetrahedrals.
Using a fragment shader on world geometry
The math is simple: for each axis, find if the point is less than 0.5 within the cube and xor the results. This determines which color to use. I got lag, but I think my code is bad.
3D textures on world geometry
This seems to be the best option given how perfectly it matches my situation, but I really don't know.
Using instanced geometry with any of the above
I'm not sure about this one; I've read instancing can be slow on large scales. I would need 31 meshes, or more if I want to optimize for skipping hidden surfaces (which is probably unnecessary anyways).
Using a geometry shader
I've read geometry shaders don't perform well on large scales.
Which of these options would be the most efficient? I think using 3D and 2D textures might be the best option, but if I get lag I want to be sure it's because I'm using bad code, not an inefficient approach.
Edit: Here's my shader code
#version 150 core

in vec4 pass_Position;
in vec4 pass_Color1;
in vec4 pass_Color2;
out vec4 out_Color;

void main(void) {
    // For each axis, check which half of the unit cube the fragment is in,
    // then XOR the three results to pick a color.
    bool x = mod(abs(pass_Position.x), 1.0) <= 0.5;
    bool y = mod(abs(pass_Position.y), 1.0) <= 0.5;
    bool z = mod(abs(pass_Position.z), 1.0) <= 0.5;
    out_Color = (x ^^ y ^^ z) ? pass_Color1 : pass_Color2;
}

The problem is, on the tetrahedral faces, I want the edges between the triangles to be seamless. So I can't use textures. (I could, but they would need to be high-res, and if I want to animate the colors (for example on water) this is not an option).
That's not necessarily the case. Remember that OpenGL doesn't see whole objects, but just individual triangles. So when rendering that cut face, it is in no way different from just rendering its flat, "fleshless" counterpart.
Any hard edge on the inner tetrahedron doesn't suffer from a texture crease, as the geometrical edge is much stronger. So what I'd do is have a separate 2D planar texture space aligned with the tetrahedral surfaces, shared by all faces coplanar to this (on a side note: with this approach you could generate the texture coordinates in a vertex shader from the vertex position).
That being said: simple 2D flat textures will eventually hit some limitations. Since you're effectively implementing a variant of an implicit surface tessellator (with the scalar field creating the surface being binary valued), it makes sense to think about procedural volumetric texture generation in the fragment shader.
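For illustration, here is a minimal Java sketch of that side note: computing planar texture coordinates by projecting the vertex position onto two basis vectors that lie in the face plane. The helper name and the basis values are just examples, not code from the question, and the same dot products could live in a vertex shader fed only by the vertex position.

static float[] planarUV(float px, float py, float pz,
                        float[] planeU, float[] planeV) {
    // UV = (dot(p, planeU), dot(p, planeV)); coplanar faces that share the
    // same basis get seamless, continuous texture coordinates.
    float u = px * planeU[0] + py * planeU[1] + pz * planeU[2];
    float v = px * planeV[0] + py * planeV[1] + pz * planeV[2];
    return new float[] { u, v };
}

// Example basis for faces whose normal is (1,1,1)/sqrt(3):
float[] planeU = { 0.7071f, -0.7071f, 0f };       // (1,-1,0) normalized
float[] planeV = { 0.4082f, 0.4082f, -0.8165f };  // (1,1,-2) normalized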

Related

efficient usage of vbo for 2d sprites [opengl/android]

I'm working on my own game engine these days, and I'd like to make the rendering process as efficient as I can. With "immediate mode" I found that it's very easy to implement the features I want to include.
Here's the list:
transforming (translation, rotation, scaling, pivot)
parenting (child sprites are affected by the parent sprite's transform)
simple vector graphics - well, this isn't that important right now
depth management
But with VBOs and shaders it's quite hard to determine a good rendering structure. At first I put four vertices in a VBO and transformed them with a matrix (glUniform), but many people said this is the worst way. So I'd like to hear your general ideas about how to implement those features efficiently, and how I should use VBOs.
You could have one square VBO that you use for all of your sprites, scaling it to the width and height, transforming it with matrices, and binding the right texture for each sprite. Child sprites can multiply their matrices with the matrices of their parent sprite. Depth management can be done with the depth buffer: just glEnable(GL_DEPTH_TEST), use the GL_LEQUAL depth function, and translate along the z-axis to whatever layer you want each sprite drawn on. You probably don't even need to worry about doing everything the best and fastest way for just 2D sprites anyway.
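A rough sketch of that idea, assuming LWJGL 2's org.lwjgl.util.vector helpers; the Sprite class and its fields are illustrative, not from the question:

import org.lwjgl.util.vector.Matrix4f;
import org.lwjgl.util.vector.Vector3f;

class Sprite {
    Sprite parent;
    float x, y, layer;          // layer becomes the z translation for depth
    float rotation;             // radians
    float scaleX = 1f, scaleY = 1f;

    // World matrix = parent's world matrix * local (translate, rotate, scale).
    // A pivot would just be an extra translate before/after the rotate.
    Matrix4f worldMatrix() {
        Matrix4f local = new Matrix4f(); // identity
        Matrix4f.translate(new Vector3f(x, y, layer), local, local);
        Matrix4f.rotate(rotation, new Vector3f(0f, 0f, 1f), local, local);
        Matrix4f.scale(new Vector3f(scaleX, scaleY, 1f), local, local);
        return parent == null
                ? local
                : Matrix4f.mul(parent.worldMatrix(), local, new Matrix4f());
    }
}

Per frame you would bind the shared unit-quad VBO once, then for each sprite upload worldMatrix() with glUniformMatrix4 and issue one draw call; with GL_DEPTH_TEST enabled, the z translation handles the layering.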

Trying to achieve dynamic lighting in a tiled 2D isometric environment using Java2D

I am trying to write some lighting code for a Java2D isometric game I am writing - I have found a few algorithms I want to try implementing, one of which I found here.
The problem is that this sort of algorithm would require some optimal pixel-shading effect that I haven't found a way of achieving via Java2D. Preferably some method via the graphics hardware but if that isn't possible - at least a method of achieving the same effect quickly in software.
If that isn't possible, could someone direct me to a more optimal algorithm with Java2D in mind? I have considered per-tile lighting - however I find the drawPolygon method isn't hardware accelerated and thus performs very slowly.
I want to try and avoid native dependencies or the requirement for elevated permissions in an applet.
Thanks
I did a lot of research since I posted this question - there are tons of alternatives, and JavaFX does intend (in a later release) to include its own shader language for those interested. There is also of course LWJGL, which will allow you to load your own shaders onto the GPU.
However, if you're stuck with Java2D (as I am), it is still possible to implement lighting in an isometric game; it is just 'awkward' because you cannot perform the light shading at a per-pixel level.
How it Looks:
I have achieved a (highly unpolished - after some polishing I can assure you it will look great) effect for casting shadows, depth sorting the light map, and applying the lighting without experiencing a drop in frame-rate. Here is how it looks:
You'll see in this screenshot a diffuse light (not shaded in, but that step is relatively easy in contrast to the steps to get there) casting shadows. The areas behind the entities that obstruct the light's passage, but still within the bounds of the light's maximum fall-off, are shaded in as ambient lighting; in reality this area is passed to the light's rendering routine to factor in the amount of obstruction that has occurred, so that the light can apply a prettier gradient (or some fading effect).
The current implementation of the diffuse lighting is to simply render obstructed regions in the ambient colour and non-obstructed regions in the light's colour - obviously you'd also apply a fading effect as you get further from the light (that part of the implementation I haven't done yet, but as I said it is relatively easy).
How I did it:
I don't guarantee this is the most optimal method, but for those interested:
Essentially, this effect is achieved by using a lot of Java shape operations - the rendering of the light map is accelerated by using a VolatileImage.
When the light map is being generated, the render routine does the following (a short Java sketch of these Area operations follows the list):
1. Creates an Area object that contains a Rectangle covering the entirety of the screen. This area will contain your ambient lighting.
2. It then iterates through the lights, asking them what their light-casting Area would be if there were no obstructions in the way.
3. It takes this area object and searches the world for Actors\Tiles that are contained within the area that the light would be cast in.
4. For every tile it finds that obstructs view in the light's casting area, it calculates the difference between the light source's position and the obstruction's position (essentially creating a vector that points AT the obstruction from the light source - this is the direction you want to cast your shadow in). This pointing vector (in world space) needs to be translated to screen space.
5. Once that has been done, a perpendicular to that vector is taken and normalized. This essentially gives you a line you can travel up or down along by multiplying it by any given length. This vector is perpendicular to the direction you want to cast your shadow over.
6. Almost done: you construct a polygon that consists of four points. The first two points are at the base of the screen coordinate of your obstruction's center point. To get the first point, you travel up your perpendicular vector (calculated in 5) by half your tile's height [this is a relatively accurate approximation, though I think this part of the algorithm is slightly incorrect - but it has no noticeable impact on the visual effect] - then of course add the obstruction's origin to that. To get the second, you do the same but travel down instead.
7. The remaining two points are calculated exactly the same way - only these points need to be projected outward in the direction of your shadow's projection vector calculated in 4. You can choose any large amount to project them outward by - just as long as it reaches at least outside of your light's casting area (so if you just want to do it stupidly, multiply your shadow projection vector by a factor of 10 and you should be safe).
8. From the polygon you just constructed, construct an Area, and then invoke the "intersect" method with your light's area as the first argument - this will ensure that your shadow's area doesn't reach outside of the bounds of the area that your light casts over.
9. Subtract from your light's casting area the shadow area you constructed above. At this point you now have two areas - the area where the light casts unobstructed, and the area the light casts over obstructed. If your Actors have a visibility-obstruction factor that you used to determine that a particular actor was obstructing view, you also have the grade at which it obstructs the view, which you can apply later when you are drawing in the light effect (this will allow you to choose between a darker\brighter shade depending on how much light is being obstructed).
10. Subtract from the ambient light area you constructed in (1) both the light area and the obstructed light area, so you don't apply the ambient light to areas where the lighting effect will take over and render into.
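A rough Java sketch of the Area bookkeeping in the steps above. Light, Obstruction, obstructionsInside() and buildShadowPolygon() stand in for your own classes and for steps 3-7; only the java.awt.geom calls are the point here.

import java.awt.Polygon;
import java.awt.Rectangle;
import java.awt.geom.Area;
import java.awt.geom.Ellipse2D;

Area ambient = new Area(new Rectangle(0, 0, screenWidth, screenHeight)); // step 1

for (Light light : lights) {
    // Step 2: the unobstructed casting area (a circle with the light's radius).
    Area lightArea = new Area(new Ellipse2D.Double(
            light.x - light.radius, light.y - light.radius,
            2 * light.radius, 2 * light.radius));

    Area obstructedArea = new Area();
    for (Obstruction o : obstructionsInside(lightArea)) {   // step 3
        Polygon shadowPoly = buildShadowPolygon(light, o);  // steps 4-7
        Area shadow = new Area(shadowPoly);
        shadow.intersect(lightArea);                        // step 8
        obstructedArea.add(shadow);
    }

    lightArea.subtract(obstructedArea);                     // step 9
    ambient.subtract(lightArea);                            // step 10
    ambient.subtract(obstructedArea);

    // ... fill lightArea with the light's colour and obstructedArea with a
    //     darker shade on the VolatileImage light map ...
}
// ... finally fill what is left of 'ambient' with the ambient colour ...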
Now you need to merge your light map with your depth-buffered world's render routine
Now that you've rendered your light map and it is contained inside a VolatileImage, you need to throw it into your world's render routine and depth-sorting algorithm. Since the back buffer and the light map are both VolatileImages, rendering the light map over the world is relatively optimal.
You need to construct a polygon that is essentially a strip containing what a vertical strip of your world tiles would be rendered into (look at my screenshot: you'll see an array of thin diagonal lines separating these strips - those strips are what I am referring to). You can then render the light map strip by strip (render it over the strip after you've rendered the last tile in that strip, since - obviously - the light map has to be applied over the map). You can use the same image map; just use that strip as a clip for the Graphics object - you will need to translate that strip polygon down per strip rendered.
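A minimal sketch of that strip-by-strip compositing with plain java.awt; backBuffer (a Graphics2D), lightMap (a VolatileImage) and the strip polygons are your own objects, the names here are just placeholders:

Graphics2D g = backBuffer;                  // Graphics2D of your back buffer
for (Polygon strip : strips) {
    // ... draw every tile belonging to this strip first ...
    Shape oldClip = g.getClip();
    g.setClip(strip);                       // limit drawing to this diagonal strip
    g.drawImage(lightMap, 0, 0, null);      // composite the light map over the strip
    g.setClip(oldClip);
}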
Anyway, like I said I don't guarantee this is the most optimal way - but so far it is working fine for me.

How exactly does deferred shading work in LWJGL?

I want to start a deferred shading project with GLSL, Java & OpenGL.
1. How does a deferred rendering pipeline work? Does it render the scene for each image?
For example, when I want to create a specular, blur and shadow texture, do I need to render the scene for each of these textures?
I've seen some code snippets and there were no multiple render loops.
2. What is a geometry buffer and what does it do? Is it something like storage for scene data that I can draw to a texture without rendering again?
To add something more specific so you can get started: you need FBOs with multiple attachments and a way for your shader to write to multiple FBO attachments. Google glDrawBuffers. Your FBO attachments also need to be textures so the information can be passed to a shader. The FBO attachments should be the same size as the screen you are rendering to. There are many ways to approach this. Here is one example.
You need two FBOs
Geometry Buffer
1. Diffuse (GL_RGBA)
2. Normal Buffer (GL_RGB16F)
3. Position Buffer (GL_RGB32F)
4. Depth Buffer
Note that 3) is a huge waste since we can use the depth buffer and the projection to reconstruct the position. That is a lot cheaper. Having the position buffer to begin with is a good start at least. Attack one problem at a time.
The normal buffer (2) can also be compressed more.
Light Accumulation Buffer
1. Light Buffer (GL_RGBA)
2. Depth Buffer
The depth buffer attachment in this FBO should be the same attachment as in the geometry buffer. We might not use this depth buffer information in this example, but you will need it sooner or later. It will always contain the depth information from the first stage.
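A hedged LWJGL 2 sketch of setting these FBOs up. The geometry buffer is shown in full; the light accumulation buffer is built the same way with a single GL_RGBA colour attachment, reusing the same depth texture. No error checking, and the method names are just examples.

import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL14.GL_DEPTH_COMPONENT24;
import static org.lwjgl.opengl.GL20.glDrawBuffers;
import static org.lwjgl.opengl.GL30.*;

static int createTexture(int internalFormat, int format, int type, int w, int h) {
    int tex = glGenTextures();
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, w, h, 0, format, type, (ByteBuffer) null);
    return tex;
}

static int createGeometryBuffer(int w, int h) {
    int fbo = glGenFramebuffers();
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    int diffuse  = createTexture(GL_RGBA8,  GL_RGBA, GL_UNSIGNED_BYTE, w, h);
    int normal   = createTexture(GL_RGB16F, GL_RGB,  GL_FLOAT,         w, h);
    int position = createTexture(GL_RGB32F, GL_RGB,  GL_FLOAT,         w, h);
    int depth    = createTexture(GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT, GL_FLOAT, w, h);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, diffuse,  0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, normal,   0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, position, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depth,    0);

    // Map gl_FragData[0..2] to the three colour attachments.
    java.nio.IntBuffer drawBuffers = BufferUtils.createIntBuffer(3);
    drawBuffers.put(GL_COLOR_ATTACHMENT0).put(GL_COLOR_ATTACHMENT1).put(GL_COLOR_ATTACHMENT2);
    drawBuffers.flip();
    glDrawBuffers(drawBuffers);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return fbo; // keep the texture ids around too; they are the shader inputs later
}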
How do we render this stuff?
We start by rendering our scene with very simple shaders. Their purpose is mainly to fill the geometry buffer: we simply draw all our geometry with a very simple shader that fills it up. For simplicity I use #version 120 shaders and no texture mapping (although that is super trivial to add).
Vertex Shader :
#version 120
varying vec3 normal;
varying vec4 position;
void main( void )
{
    normal = normalize(gl_NormalMatrix * gl_Normal);
    position = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment Shader :
#version 120
uniform vec4 objectColor; // Color of the object you are drawing
varying vec3 normal;
varying vec4 position;
void main( void )
{
    // Use glDrawBuffers to configure multiple render targets
    gl_FragData[0] = objectColor;                   // Diffuse
    gl_FragData[1] = vec4(normalize(normal), 0.0);  // Normal
    gl_FragData[2] = vec4(position.xyz, 0.0);       // Position
}
We have now, for example, drawn 20 objects to our geometry buffer with different colors. If we look at the diffuse buffer, it's a pretty dull image with plain colors (or plain textures without lighting), but we still have the view position, normal and depth of each single fragment. This will be valuable information for us in the next stage when doing the lighting.
Light Accumulation
Now we switch to our light accumulation buffer, and it is time to do some light magic. For each single light we are going to draw to our light accumulation buffer with additive blending enabled. How you do this is not that important for the result, as long as you cover all the fragments affected by the light. You can do this initially by drawing a fullscreen quad, but that is very costly. We will only cover point lights, but this is more than sufficient to cover the simple lighting principle (simple point lights are extremely trivial to make). A simple way is to draw a cube or a low-poly sphere (light volume) at the light position, scaled by the light radius. This makes rendering tons of small lights way more efficient, but don't worry about performance now. A fullscreen quad will do the trick just fine.
Now, the simple principle is :
Each fragment has a stored x,y,z position we simply get with a texture fetch
We pass in the position of the light
We pass in the radius of the light
We can know if the fragment is affected by the light simply by measuring the distance between the value in the position buffer and the light position
From there on it's pretty standard light calculations
Fragment shader :
(This shader works for anything. Light volumes, full screen quads.. whatever)
#version 120
uniform sampler2D diffuseBuffer;
uniform sampler2D positionBuffer;
uniform sampler2D normalBuffer;
uniform float lightRadius; // Radius of our point light
uniform vec3 lightPos;     // Position of our point light
uniform vec4 lightColor;   // Color of our light
uniform vec2 screensize;   // Screen resolution
void main()
{
    // UV for the current fragment
    vec2 uv = vec2(gl_FragCoord.x / screensize.x, gl_FragCoord.y / screensize.y);

    // Read data from our gbuffer (sent in as textures)
    vec4 diffuse_g  = texture2D(diffuseBuffer, uv);
    vec4 position_g = texture2D(positionBuffer, uv);
    vec4 normal_g   = texture2D(normalBuffer, uv);

    // Distance between the light center and the current pixel
    float distance = length(lightPos - position_g.xyz);

    // If the fragment is NOT affected by the light we discard it!
    // PS : Don't kill me for using discard. This is for simplicity.
    if(distance > lightRadius) discard;

    // Calculate the intensity value this light will affect the fragment with (standard light stuff!)
    // ... Use lightPos and position_g to calculate the light normal ...
    // ... Do the standard dot product of the light normal and normal_g ...
    // ... Just standard light stuff ...

    // Super simple attenuation placeholder
    float attenuation = 1.0 - (distance / lightRadius);

    gl_FragColor = diffuse_g * lightColor * attenuation * <multiplier from light calculation>;
}
We repeat this for each light. The order the lights are rendered in doesn't matter, since the result will always be the same with additive blending. You can also do it much more simply by accumulating only light intensity. In theory you should already have the final lit result in the light accumulation buffer, but you might want to do additional adjustments.
Combine
You might want to adjust a few things. Ambient? Color correction? Fog? Other post processing stuff. You can combine the light accumulation buffer and the diffuse buffer with some adjustments. We kind of already did that in the light stage, but if you only saved light intensity, you will have to do a simple diffuse * light combine here.
Normally just a full screen quad that renders the final result to the screen.
More Stuff
As mentioned earlier we want to get rid of the position buffer. Use the depth buffer with your projection to reconstruct the position.
You don't need to use light volumes. Some prefer to simply render a quad large enough to cover the area on the screen.
The example above does not cover issues like how to define unique materials for each object. There are many resources and variants of gbuffer formats out there. Some prefer to save a material index in the alpha channel (in the diffuse buffer), then look up a row in a texture to get material properties.
Directional lights and other light types affecting the entire scene can easily be handled by rendering a full screen quad into the light accumulation buffer
Spot lights are also nice to have and also fairly easy to implement
We probably want more light properties
We might want some way to weight how the diffuse and light buffer is combined to support ambient and emissive
There are many ways to store normals in a more compact way. You can for example use spherical coordinates to remove one value. There are tons of articles about deferred lighting and gbuffer formats out there. Looking at the formats people are using can give you some ideas. Just make sure your gbuffer doesn't get too fat.
Reconstructing the view position using the linearized depth value and your projection is not that hard. You need to construct a vector using the projection constants. Multiply it with your depth value (between 0 and 1) to get the view position. There are several articles out there. It's just two lines of code.
There's probably a lot to take in from this post, but hopefully it shows the general principle. None of the shaders have been compiled; this was just converted from GLSL 3.30 to 1.20 from memory.
There are several approaches to light accumulation. You might want to reduce the number of draw calls by making VBOs with 1000 cubes and cones to batch-draw everything. With more modern GL versions you can also use the geometry shader to calculate a quad that covers the light area for each light. Probably the best way is to use compute shaders, but that requires GL 4.3. The advantage there is that you can iterate over all the light information and do one single write. There are also pseudo-compute methods where you divide the screen into a rough grid and assign a light list to each cell. This can be done with only a fragment shader, but requires you to build the light lists on the CPU and send the data to the shader through UBOs.
The compute shader method is by far the simplest one to make. It removes a lot of the complexity in the older methods to keep track and organize everything. Simply iterate the lights and do one single write to the framebuffer.
1) Deferred shading involves separating the rendering of a scene's geometry from basically everything else, splitting the work into separate passes.
For example when I want to create a specular, blur and shadow texture, do I need to render the scene for each of these textures.
For the shadow texture, probably (if you're using shadow mapping, this can't be avoided). But for everything else:
No, which is why deferred shading can be so useful. In a deferred pipeline you render the geometry once and save the color, normal, and 3D location (the Geometry Buffer) for each pixel. This can be achieved in a couple of different ways, but the most common is to use Frame Buffer Objects (FBOs) with multiple render targets (MRTs). When using FBOs for deferred shading you render the geometry in exactly the same way you would render normally, except that you bind the FBO, use multiple outputs in your fragment shader (one for each render target), and don't calculate any lighting. You can read more about FBOs and MRTs on the OpenGL website or through a quick Google search. Then to light your scene you would read this data in a shader and use it to compute lighting just like you would normally. The easiest way to do this (but not the best way) is to render a full screen quad and sample the color, normal, and location textures for your scene.
2) The geometry buffer is all of the data necessary for lighting and other shading that will be done on the scene. It is created during the geometry pass (the only time when geometry needs to be rendered) and is typically a set of textures. Each texture is used as a render target (See above about FBOs and MRTs) when rendering the geometry. You typically have one texture for color, one for normals, and one for 3d location. It can also contain more data (like parameters for lighting) if necessary. That gives you all the data you need for each pixel to be lit during a lighting pass.
Pseudo code could look like this:
for all geometry {
    render to FBO
}
for all lights {
    read FBO and do lighting
}
// ... here you can read the FBO and use it for anything!
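As a concrete, hedged LWJGL 2 sketch of the "read FBO and do lighting" loop above, assuming the same kind of static GL imports as earlier, a trivial pass-through vertex shader, and illustrative names (lightShader, diffuseTex, positionTex, normalTex, Light) that are not from the question:

glUseProgram(lightShader);
glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, diffuseTex);
glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, positionTex);
glActiveTexture(GL_TEXTURE2); glBindTexture(GL_TEXTURE_2D, normalTex);
glUniform1i(glGetUniformLocation(lightShader, "diffuseBuffer"),  0);
glUniform1i(glGetUniformLocation(lightShader, "positionBuffer"), 1);
glUniform1i(glGetUniformLocation(lightShader, "normalBuffer"),   2);

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                 // additive accumulation
glMatrixMode(GL_PROJECTION); glLoadIdentity();
glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

for (Light light : lights) {
    // ... upload lightPos, lightRadius, lightColor, screensize uniforms ...
    glBegin(GL_QUADS);                       // fullscreen quad in NDC
    glVertex2f(-1f, -1f);
    glVertex2f( 1f, -1f);
    glVertex2f( 1f,  1f);
    glVertex2f(-1f,  1f);
    glEnd();
}
glDisable(GL_BLEND);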
The basic idea of deferred rendering is to separate the process of transforming the geometry of the meshes into locations on the target framebuffer, and giving the pixels of the target framebuffer their final colour.
The first step is to render the geometry in a way, that each pixel of the framebuffer receives information about the original geometry, i.e. location in either world or eye space (eye space is preferred), the transformed tangent space (normal, tangent, binormal), and other attributes depending on what's later required. This is the "geometry buffer" (answering your 2. question as well).
With the geometry buffer at hand, the precomputed geometry→pixel mapping can be reused for several similar processing steps. For example, if you want to render 50 light sources, each lighting pass only has to process a fullscreen quad instead of the scene geometry (50 quads equal 100 triangles, which is child's play for a modern GPU), where for each iteration other parameters are used (light position, direction, shadow buffers, etc.). This is in contrast to regular multipass rendering, where for each iteration the whole geometry needs to be reprocessed.
And of course each pass may be used to render a different kind of shading process (glow, blur, bokeh, halos, etc.)
Then, for each iteration/pass, the results are merged together into a composite image.

OpenGL - Pixel color at specific depth

I have rendered a 3D scene in OpenGL viewed through a gluOrtho (orthographic) projection. In my application I am looking at the front face of a cube of volume 100x70x60 mm (which I have as 1000x700x600 pixels). Inside this cube I have rendered a simple blue sphere which sits exactly in the middle and 'fills' the cube (radius 300 pixels).
I now want to read the color value of pixels (in 3D) at specific points within the cube; i.e. I wish to know if say point (100,100,-200) is blue or blank (black).
glReadPixels only allows 2D extraction of color, and I have tried it with GL_DEPTH_COMPONENT, but I am unsure what this should return in byte form. Is there a way to combine the two? Am I missing something?
I am using Eclipse with Java and JOGL.
This can't be done in the context of OpenGL--you'll need some sort of scene graph or other space partitioning scheme working in concert with your application's data structures.
The reason is simple: the frame buffer only stores the color and depth of the fragment nearest to the eye at each pixel location (assuming a normal GL_LESS depth function). The depth value stored in the Z-buffer is used to determine if each subsequent fragment is closer or farther from the eye than the existing fragment, and thus whether the new fragment should replace the old or not. The frame buffer only stores color and depth values from the most recent winner of the depth test, not the entire set of fragments that would have mapped to that pixel location. Indeed, there would be no way to bound the amount of graphics memory required if that were the case.
You're not the first to fall for this misconception, so I'll say it in the most blunt way possible: OpenGL doesn't work that way. OpenGL never(!) deals with objects or any complex scenes. The only things OpenGL knows about are framebuffers, shaders and single triangles. Whenever you draw an object, usually composed of triangles, OpenGL will only see one triangle at a time. And once something has been drawn to the framebuffer, whatever was there before is lost.
There are algorithms based on the concept of rasterizers (which OpenGL is) that decompose a rendered scene into its parts; depth peeling would be one of them.

OpenGL: Create a sky box?

I'm new to OpenGL. I'm using JOGL.
I would like to create a sky for my world that I can texture with clouds or stars. I'm not sure what the best way to do this is. My first instinct is to make a really big sphere with quadric orientation GLU_INSIDE, and texture that. Is there a better way?
A skybox is a pretty good way to go. You'll want to use a cube map for this. Basically, you render a cube around the camera and map a texture onto the inside of each face of the cube. I believe OpenGL may include this in its fixed function pipeline, but in case you're taking the shader approach (fixed function is deprecated anyway), you'll want to use cube map samplers (samplerCUBE in Cg, samplerCube in GLSL). When drawing the cube map, you also want to remove the translation from the modelview matrix but keep the rotation (this causes the skybox to "follow" the camera but allows you to look around at different parts of the sky).
The best thing to do is actually draw the cube map after drawing all opaque objects. This may seem strange because by default the sky will block other objects, but you use the following trick (if using shaders) to avoid this: when writing the final output position in the vertex shader, instead of writing out .xyzw, write .xyww. This will force the sky to the far plane which causes it to be behind everything. The advantage to this is that there is absolutely 0 overdraw!
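A small sketch of the "remove translation but keep rotation" step, assuming your view matrix is stored as a column-major float[16] (that layout is an assumption; adapt the indices to your math library):

float[] skyView = view.clone();
skyView[12] = 0f;   // x translation
skyView[13] = 0f;   // y translation
skyView[14] = 0f;   // z translation
// Upload skyView (and your projection) for the skybox pass. In the skybox
// vertex shader, write gl_Position as pos.xyww instead of pos.xyzw so the sky
// is forced onto the far plane and sits behind everything drawn before it.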
Yes.
Making a really big sphere has two major problems. First, you may encounter problems with clipping. The sky may disappear if it is outside of your far clipping distance. Additionally, objects that enter your sky box from a distance will visually pass through a very solid wall. Second, you are wasting a lot of polygons (and a lot of pain) for a very simple effect.
Most people actually use a small cube (hence the name "sky box"). You need to render the cube in the pre-pass with depth testing turned off. Thus, all objects will render on top of the cube regardless of their actual distance to you. Just make sure that the length of a side is greater than twice your near clipping distance, and you should be fine.
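In JOGL that pre-pass boils down to something like this (gl is your GL2 instance and drawSkyboxCube is your own cube-drawing code, both assumptions):

gl.glDisable(GL.GL_DEPTH_TEST);  // the sky ignores depth, so everything later draws over it
drawSkyboxCube(gl);              // your own cube with the sky texture on the inside faces
gl.glEnable(GL.GL_DEPTH_TEST);
// ... now render the rest of the scene as usual ...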
Spheres are nice to handle as they easily avoid distortions, corners, etc., which may be visible in some situations. Another possibility is a cylinder.
For a really high quality sky you can make a sky lighting simulation, setting the sphere colors depending on the time (=> sun position!) and direction, and add some clouds as 3D objects between the sky sphere and the view position.
