I have rendered a 3D scene in OpenGL, viewed through an orthographic (glOrtho) projection. In my application I am looking at the front face of a cube of volume 100x70x60 mm (which I render as 1000x700x600 pixels). Inside this cube I have rendered a simple blue sphere which sits exactly in the middle and 'fills' the cube (radius 300 pixels).
I now want to read the color value of pixels (in 3D) at specific points within the cube; i.e. I wish to know whether, say, point (100, 100, -200) is blue or blank (black).
glReadPixels only allows 2D extraction of color, and I have tried it with GL_DEPTH_COMPONENT but am unsure what this should return in byte form. Is there a way to combine the two? Am I missing something?
I am using Eclipse with Java and JOGL.
This can't be done in the context of OpenGL--you'll need some sort of scene graph or other space partitioning scheme working in concert with your application's data structures.
The reason is simple: the frame buffer only stores the color and depth of the fragment nearest to the eye at each pixel location (assuming a normal GL_LESS depth function). The depth value stored in the Z-buffer is used to determine if each subsequent fragment is closer or farther from the eye than the existing fragment, and thus whether the new fragment should replace the old or not. The frame buffer only stores color and depth values from the most recent winner of the depth test, not the entire set of fragments that would have mapped to that pixel location. Indeed, there would be no way to bound the amount of graphics memory required if that were the case.
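As for the side question about GL_DEPTH_COMPONENT: reading the depth buffer gives you the depth of that front-most fragment, normally as a normalized value in [0, 1] rather than a meaningful byte. A rough JOGL 2.x sketch (the 'gl' object and the window coordinates winX/winY are assumed to come from your own code) reads it as a float, but note it still only describes the nearest surface:

import java.nio.FloatBuffer;
import com.jogamp.common.nio.Buffers;
import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL2ES2;

// Returns the depth of the nearest surface under one window pixel:
// 0.0 = near plane, 1.0 = far plane. Nothing behind that surface survives in the framebuffer.
static float readDepthAt(GL2ES2 gl, int winX, int winY) {
    FloatBuffer depth = Buffers.newDirectFloatBuffer(1);
    gl.glReadPixels(winX, winY, 1, 1, GL2ES2.GL_DEPTH_COMPONENT, GL.GL_FLOAT, depth);
    return depth.get(0);
}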
You're not the first to fall for this misconception, so I'll say it in the bluntest way possible: OpenGL doesn't work that way. OpenGL never(!) deals with objects or any complex scenes. The only things OpenGL knows about are framebuffers, shaders and single triangles. Whenever you draw an object, usually composed of triangles, OpenGL only sees one triangle at a time. And once something has been drawn to the framebuffer, whatever was there before is lost.
There are algorithms built on rasterizers (which is what OpenGL is) that decompose a rendered scene into its depth layers; depth peeling is one of them.
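For the specific scene in the question, the practical route is to query your own scene description instead of the framebuffer: a sphere centred in a known box can be tested analytically. A rough sketch (the centre coordinates and the axis convention are assumptions, adjust them to however you set up your cube):

// Illustrative only: decides "blue or black" from the scene data, not from OpenGL.
// Assumes the 1000x700x600 box spans x in [0,1000], y in [0,700], z in [0,-600],
// with the radius-300 sphere at its centre.
static boolean isBlueAt(double x, double y, double z) {
    double cx = 500, cy = 350, cz = -300, r = 300;
    double dx = x - cx, dy = y - cy, dz = z - cz;
    return dx * dx + dy * dy + dz * dz <= r * r;   // inside the sphere -> blue, otherwise black
}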
I've recently been looking into LibGDX and seem to have hit a wall. As seen in the picture, the blue dot represents the user's finger; the map generation itself is where I seem to get stuck. Does LibGDX provide a method of dynamically drawing curved objects? I could simply generate them myself as images, but then the image gets hugely stretched, to the point that the gap meant for one finger could fit three, and it would also need to be thousands of pixels tall to accommodate the whole level design.
Should I be drawing hundreds of polygons close together to make a curved line?
On a side note, I'll need a way of determining when the map has scrolled from bottom to top so I can generate another 'chunk' of map.
You don't need hundreds of polygons to make a curve like you drew. You could get away with 40 quads on the left, and 40 on the right, and it would look pretty smooth. Raise that to 100 on each side and it will look almost perfectly smooth, and no modern device is going to have any trouble running that at 60fps.
You could use the Mesh class to generate a procedural mesh for each side. You can make the mesh stay in one spot, locked to the camera, and modify its vertices and UVs to make it look like you are panning down an infinitely long corridor. This will take a fair amount of math up front but should be smooth sailing once you have that down.
Basically, your level design could be based on some kind of equation that takes Y offset as an input. Or it could be a long array of offsets, and you could use a spline equation or linear equation to interpolate between them. The output would be the UV and X coordinates which can be used to update each of the vertices of your two meshes.
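For example, the offset-array idea can be as simple as linear interpolation between control points; a rough sketch (names, spacing and values are made up):

// Control points every CONTROL_SPACING world units of Y; linearly interpolate to get
// the gap's centre X at any scroll position. Swap the lerp for a spline if you want smoother curves.
static final float CONTROL_SPACING = 200f;
static float[] gapCenters = { 240f, 300f, 180f, 360f, 300f };

static float gapCenterAt(float worldY) {
    float t = worldY / CONTROL_SPACING;
    int i = Math.max(0, Math.min((int) t, gapCenters.length - 2));
    float frac = t - i;
    return gapCenters[i] * (1f - frac) + gapCenters[i + 1] * frac;
}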
You can use the vertex shader to efficiently update the UV coordinates, using a constant offset uniform parameter that you update each frame. That way you don't have to move UV data to the GPU every frame.
For the vertex positions, use your Mesh's underlying float[] and call setVertices() each frame to update it. Info here.
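As a rough illustration of that per-frame update (the quad count, quad height and the gapCenterAt() helper above are assumptions, and the mesh here is laid out as a triangle strip with position + texcoord attributes):

import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.VertexAttribute;

static final int QUADS = 40;
static final float QUAD_HEIGHT = 16f;

// Two vertices per row, 5 floats per vertex (x, y, z, u, v).
Mesh leftSide = new Mesh(false, (QUADS + 1) * 2, 0,
        VertexAttribute.Position(), VertexAttribute.TexCoords(0));
float[] verts = new float[(QUADS + 1) * 2 * 5];

void updateLeftSide(float scrollY) {
    int i = 0;
    for (int row = 0; row <= QUADS; row++) {
        float y = row * QUAD_HEIGHT;
        float gapX = gapCenterAt(scrollY + y);   // where the corridor's left edge sits at this height
        verts[i++] = 0f;   verts[i++] = y; verts[i++] = 0f; verts[i++] = 0f; verts[i++] = y / 100f;  // outer edge
        verts[i++] = gapX; verts[i++] = y; verts[i++] = 0f; verts[i++] = 1f; verts[i++] = y / 100f;  // gap edge
    }
    leftSide.setVertices(verts);   // one upload per frame for the whole strip
}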
Actually, it might look better if you leave the UV's and the X positions alone, and just scroll the Y positions up. Keep a couple quads of padding off top and bottom of screen, and just move the top quad to the bottom after it scrolls off screen.
How about creating a set of curved forms that can be put together in varying order? The gap would be in the middle at both the top and bottom of each image (with the same curvature at the start and end points)...
And in between the start and end points you can go crazy on the shape.
And finally, you can randomly put those images together and get an endless world.
If you don't want to stop in the middle each time, you could also have like three entry and exit points (left, middle, right)... and after an image that ends left, you of course need to add an image that starts left, but might end somewhere else...
I am trying to write some lighting code for a Java2D isometric game I am writing. I have found a few algorithms I want to try implementing, one of which I found here:
The problem is that this sort of algorithm requires a fast per-pixel shading effect that I haven't found a way of achieving in Java2D. Preferably some method using the graphics hardware, but if that isn't possible, at least a method of achieving the same effect quickly in software.
If that isn't possible, could someone direct me to a more optimal algorithm with Java2D in mind? I have considered per-tile lighting - however I find the drawPolygon method isn't hardware accelerated and thus performs very slowly.
I want to try and avoid native dependencies or the requirement for elevated permissions in an applet.
Thanks
I did a lot of research since I posted this question - there are tons of alternatives, and JavaFX does intend (in a later release) to include its own shader language for those interested. There is also, of course, LWJGL, which will allow you to load your own shaders onto the GPU.
However, if you're stuck with Java2D (as I am), it is still possible to implement lighting in an isometric game; it is just 'awkward' because you cannot perform the light shading at a per-pixel level.
How it Looks:
I have achieved a (highly unpolished - after some polishing I can assure you it will look great) effect for casting shadows, depth sorting the light map, and applying the lighting without experiencing a drop in frame-rate. Here is how it looks:
You'll see in this screenshot a diffuse light (not shaded in yet, but I'd say that step is relatively easy compared with the steps needed to get there) casting shadows. The areas behind the entities that obstruct the light's passage, but that still lie within the bounds of the light's maximum fall-off, are currently shaded in as the ambient lighting; in reality this area is passed to the light's rendering routine to factor in the amount of obstruction that has occurred, so that the light can apply a prettier gradient (or fading effect of some sort).
The current implementation of the diffuse lighting is simply to render obstructed regions in the ambient colour and non-obstructed regions in the light's colour - obviously you'd also apply a fading effect as you get further from the light (I haven't done that part of the implementation yet, but as I said it is relatively easy).
How I did it:
I don't guarantee this is the most optimal method, but for those interested:
Essentially, this effect is achieved by using a lot of Java shape operations - the rendering of the light map is accelerated by using a VolatileImage.
When the light map is being generated, the render routine does the following:
1. Create an Area object that contains a Rectangle covering the entirety of the screen. This area will contain your ambient lighting.
2. Iterate through the lights, asking each one what its light-casting Area would be if there were no obstructions in the way.
3. Take this area object and search the world for Actors/Tiles that are contained within the area the light would be cast in.
4. For every tile found that obstructs view within the light's casting area, calculate the difference between the light source's position and the obstruction's position (essentially creating a vector that points AT the obstruction from the light source - this is the direction you want to cast your shadow in). This pointing vector (in world space) needs to be translated to screen space.
5. Once that has been done, take a perpendicular to that vector and normalize it. This essentially gives you a line you can travel up or down along by multiplying it by any given length. This vector is perpendicular to the direction you want to cast your shadow over.
6. Almost done: construct a polygon that consists of four points. The first two points are at the base of the screen coordinate of your obstruction's centre point. To get the first point, travel up your perpendicular vector (calculated in 5) by half your tile's height [this is a relatively accurate approximation, though I think this part of the algorithm is slightly incorrect - it has no noticeable effect on the visual result], then of course add the obstruction's origin. To get the second point, do the same but travel down instead.
7. The remaining two points are calculated exactly the same way - only these points also need to be projected outward in the direction of your shadow's projection vector calculated in 4. You can choose any large amount to project them outwards by, as long as they reach at least outside of your light's casting area (so if you just want to do it stupidly, multiply your shadow projection vector by a factor of 10 and you should be safe).
8. From the polygon you just constructed, construct an Area, then invoke its "intersect" method with your light's area as the first argument - this assures that your shadow area doesn't reach outside the bounds of the area that your light casts over.
9. Subtract the shadow area you constructed above from your light's casting area. At this point you now have two areas - the area where the light casts unobstructed, and the area the light casts over obstructed. If your Actors have a visibility obstruction factor that you used to determine that a particular actor was obstructing view, you also have the grade at which it obstructs the view, which you can apply later when drawing in the light effect (this will allow you to choose between a darker/brighter shade depending on how much light is being obstructed).
10. Subtract from the ambient light area you constructed in (1) both the light area and the obstructed light area, so you don't apply the ambient light to areas where the lighting effect will take over and render into. (A minimal sketch of these Area operations follows this list.)
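For reference, here is a minimal sketch of the Area calls used in steps 1 and 8-10 (the shapes passed in are placeholders; building the light shape and the shadow quad is exactly what the steps above describe):

import java.awt.Polygon;
import java.awt.Rectangle;
import java.awt.geom.Area;

public class LightAreasSketch {
    public static Area[] buildLightAreas(int screenW, int screenH,
                                         Polygon lightShape, Polygon shadowQuad) {
        Area ambient = new Area(new Rectangle(0, 0, screenW, screenH)); // step 1

        Area light = new Area(lightShape);   // step 2: the light's unobstructed casting area
        Area shadow = new Area(shadowQuad);  // the quad built in steps 6-7
        shadow.intersect(light);             // step 8: clamp the shadow to the light's bounds

        Area unobstructed = new Area(light);
        unobstructed.subtract(shadow);       // step 9: where the light falls freely

        ambient.subtract(light);             // step 10: no ambient where the light will render
        return new Area[] { ambient, unobstructed, shadow };
    }
}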
Now you need to merge your light map with your depth-buffered world's render routine
Now that you've rendered your light map and it is contained inside a VolatileImage, you need to throw it into your world's render routine and depth-sorting algorithm. Since the back-buffer and the light map are both VolatileImages, rendering the light map over the world is relatively cheap.
You need to construct a polygon that is essentially a strip covering what a vertical strip of your world tiles renders into (look at my screenshot: you'll see an array of thin diagonal lines separating these strips; those strips are what I am referring to). You can then render the light map strip by strip, drawing each strip of it after you've rendered the last tile in that strip (since, obviously, the light map has to be applied over the map). You can use the same light-map image for every strip; just set the strip polygon as the clip on your Graphics, translating the strip polygon down for each strip you render.
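A minimal sketch of that strip-by-strip composition (how you build the strip polygons is up to your renderer; this only shows the clip-and-draw part):

import java.awt.Graphics2D;
import java.awt.Polygon;
import java.awt.image.VolatileImage;

public class LightMapCompose {
    // Assumes the world tiles belonging to each strip have already been drawn to backBuffer
    // by the time that strip comes up in this loop.
    public static void apply(VolatileImage backBuffer, VolatileImage lightMap, Polygon[] strips) {
        Graphics2D g = backBuffer.createGraphics();
        try {
            for (Polygon strip : strips) {
                g.setClip(strip);                   // restrict drawing to this strip only
                g.drawImage(lightMap, 0, 0, null);  // apply the light map inside the clip
            }
        } finally {
            g.dispose();
        }
    }
}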
Anyway, like I said I don't guarantee this is the most optimal way - but so far it is working fine for me.
The light map is applied per strip, as described above.
In my game, the world is made of cubes, but the cubes are divided into 5 parts: a tetrahedron and 4 corners. Each type of block has two colors. This is what a block might look like if one corner was cut, although each corner/face may have different colors from the rest.
The problem is, on the tetrahedral faces, I want the edges between the triangles to be seamless. So I can't use textures. (I could, but they would need to be high-res, and if I want to animate the colors (for example on water) this is not an option).
I've found these approaches:
Drawing each triangle on each tetrahedral face, then each square on each cubical face (using a VBO and all that stuff)
Too many polys! Lag ensues. And this was only rendering the tetrahedrals.
Using a fragment shader on world geometry
The math is simple: for each axis, find if the point is less than 0.5 within the cube and xor the results. This determines which color to use. I got lag, but I think my code is bad.
3D textures on world geometry
This seems to be the best option given how perfectly it matches my situation, but I really don't know.
Using instanced geometry with any of the above
I'm not sure about this one; I've read instancing can be slow on large scales. I would need 31 meshes, or more if I want to optimize for skipping hidden surfaces (which is probably unnecessary anyways).
Using a geometry shader
I've read geometry shaders don't perform well on large scales.
Which of these options would be the most efficient? I think using 3d and 2d textures might be the best option, but if I get lag I want to be sure it's because I'm using bad code not an inefficient approach.
Edit: Here's my shader code
#version 150 core

in vec4 pass_Position;
in vec4 pass_Color1;
in vec4 pass_Color2;
out vec4 out_Color;

void main(void) {
    // Note: "1f" is not a valid GLSL literal; plain floats are used instead.
    // Each flag is true in the "first" half of the cube along that axis.
    bool x = mod(abs(pass_Position.x), 1.0) <= 0.5;
    bool y = mod(abs(pass_Position.y), 1.0) <= 0.5;
    bool z = mod(abs(pass_Position.z), 1.0) <= 0.5;

    if (x ^^ y ^^ z) out_Color = pass_Color1;
    else             out_Color = pass_Color2;
}
The problem is, on the tetrahedral faces, I want the edges between the triangles to be seamless. So I can't use textures. (I could, but they would need to be high-res, and if I want to animate the colors (for example on water) this is not an option).
That's not necessarily the case. Remember that OpenGL doesn't see whole objects, but just individual triangles. So rendering that cut face is in no way different from rendering its flat, "fleshless" counterpart.
Any hard edge on the inner tetrahedron doesn't suffer from a texture crease, as the geometrical edge is much stronger. So what I'd do is have a separate 2D planar texture space aligned with the tetrahedral surfaces, shared by all faces coplanar with it (on a side note: with this approach you could generate the texture coordinates from the vertex position in a vertex shader).
That being said: simple 2D flat textures will eventually hit some limitations. Since you're effectively implementing a variant of an implicit surface tessellator (with the scalar field creating the surface being binary valued), it makes sense to think about procedural volumetric texture generation in the fragment shader.
I am having some performance problems with OpenGL. I essentially want to create a grid of squares. I first tried to implement it where each square I would translate to where I want a square, then multiply the model and view matrix, pass it into the shader program and draw the square. I would do this for each square. After creating about 50 squares the frame rate would start to drop to less than what I desire.
I then tried a VBO method where I basically would generate a vertex buffer each time the squares change location. Frame rate increased dramatically with this approach, but I have too much latency when something changes because it has to regenerate all the vertex locations.
What I think I need is a matrix stack... I used OpenGL 1.1 before and would use push/pop. I don't really understand the concepts of what that was doing, though, or how to reproduce it. Does anyone know where a good example of a matrix stack is that I can use as an example? Or possibly just a good explanation of one?
You can check this tutorial; it's basically doing the same thing you want to achieve, but with cubes instead of squares. It uses a VBO as well:
http://www.learnopengles.com/android-lesson-seven-an-introduction-to-vertex-buffer-objects-vbos/
About the matrices: in OpenGL ES 2.0 you don't have any matrix-related functions anymore, but you can use the GLM library (OpenGL Mathematics), which does the same (and much more):
http://glm.g-truc.net/
It's a header library, so you just need to copy it somewhere and include it where you need it.
I'm not sure if I completely understand your objective, but I guess you could copy the data of one square to the graphics card (using a VBO) and then repeatedly update the model matrix for every square.
The concept of a matrix stack makes sense if your squares have some kind of hierarchy between them (for instance, if one of them moves, the one to its left has to move accordingly).
You can imagine it as a skeleton made out of squares. If the shoulder moves, all the pieces in the arm will move as well (hands, fingers, and so on).
You can emulate that by using a matrix stack. You can create some kind of tree with all your squares, so that every square has a list of "descendants" which apply the same transformation as their parent. Then you can render all the squares recursively like this:
- Apply the transform to the root square(s)
- Push that transform onto a stack
- Call the same render function for every child
- Every child reads the matrix on the top of the stack, multiplies it by its own transformation, pushes the new matrix onto the stack and calls its own children
- After that, every child pops the matrix it pushed before
Using glm is quite easy; you just need to create a stack (a std::vector in this case) of matrices:
std::vector<glm::mat4> matrixStack;
And then for every child:
glm::mat4 modelMatrix = matrixStack.back();        // matrix inherited from the parent
glm::mat4 nodeTransform = glm::mat4(1.0f);         /* apply this node's own transform here */
glm::mat4 combined = modelMatrix * nodeTransform;  // note: "new" is a reserved word in C++
matrixStack.push_back(combined);
/* Pass the combined matrix to the shader and call glDrawArrays (or whatever) to render your square */
for (auto &child : children) {                     // whatever container holds this node's children
    render(child);
}
matrixStack.pop_back();
For the drawing part, I guess you could bind the vertex array with the square vertices, and then update the model matrix in the shader for every child, before calling glDrawArrays.
I'm new to OpenGL. I'm using JOGL.
I would like to create a sky for my world that I can texture with clouds or stars. I'm not sure what the best way to do this is. My first instinct is to make a really big sphere with quadric orientation GLU_INSIDE, and texture that. Is there a better way?
A skybox is a pretty good way to go. You'll want to use a cube map for this. Basically, you render a cube around the camera and map a texture onto the inside of each face of the cube. I believe OpenGL may include this in its fixed-function pipeline, but in case you're taking the shader approach (fixed function is deprecated anyway), you'll want to use cube map samplers (samplerCUBE in Cg, samplerCube in GLSL). When drawing the cube map, you also want to remove the translation from the modelview matrix but keep the rotation (this causes the skybox to "follow" the camera but still allows you to look around at different parts of the sky).
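Removing the translation can be as simple as zeroing the translation column of a column-major view matrix before drawing the skybox; a small sketch, not tied to any particular math library:

// Copies a column-major 4x4 view matrix and zeroes its translation column, so the sky
// rotates with the camera but never moves away from it.
static float[] withoutTranslation(float[] view) {   // view.length == 16, column-major
    float[] m = view.clone();
    m[12] = 0f;   // x translation
    m[13] = 0f;   // y translation
    m[14] = 0f;   // z translation
    return m;
}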
The best thing to do is actually to draw the cube map after drawing all opaque objects. This may seem strange because by default the sky would block other objects, but you can use the following trick (if using shaders) to avoid this: when writing the final output position in the vertex shader, instead of writing out .xyzw, write .xyww. After the perspective divide the depth becomes w/w = 1.0, which forces the sky onto the far plane so it ends up behind everything (use a GL_LEQUAL depth function so it still passes the depth test). The advantage of this is that there is absolutely zero overdraw!
Yes.
Making a really big sphere has two major problems. First, you may encounter problems with clipping: the sky may disappear if it is outside of your far clipping distance. Additionally, objects that enter your sky box from a distance will visually pass through a very solid wall. Second, you are wasting a lot of polygons (and a lot of pain) for a very simple effect.
Most people actually use a small cube(Hence the name "Sky box"). You need to render the cube in the pre-pass with depth testing turned off. Thus, all objects will render on top of the cube regardless of their actual distance to you. Just make sure that the length of a side is greater than twice your near clipping distance, and you should be fine.
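In JOGL that ordering looks roughly like this (a sketch; drawSkyCube is a placeholder for however you actually draw the cube):

gl.glDisable(GL.GL_DEPTH_TEST);   // the sky cube neither tests nor writes depth
drawSkyCube(gl);                  // hypothetical helper: a small cube centred on the camera
gl.glEnable(GL.GL_DEPTH_TEST);
// ... now draw the rest of the scene; everything will render on top of the sky ...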
Spheres are nice to handle as they easily avoid distortions, corners, etc., which may be visible in some situations. Another possibility is a cylinder.
For a really high quality sky you can make a sky lighting simulation, setting the sphere colors depending on the time (=> sun position!) and direction, and add some clouds as 3D objects between the sky sphere and the view position.