Creating a 2D game with modern OpenGL - Java

I've tried for some time to create a simple 2D engine with modern OpenGL. By "modern OpenGL" I mean I'm not going to be using calls like these:
glBegin(); glEnd(); glVertex2i(); glOrtho();
etc.
I'm using GLSL instead.
I have run into some problems. Here are my questions:
I'm trying to create a 2D coordinate system without using old methods like glOrtho() and glViewport(). I know I'm supposed to use a matrix here and there, but how? How do I create a coordinate system?
How do I use texCoords in GLSL to display a texture in 2D space?
Could someone explain, or point to a source explaining, how to set the size of a texture? Right now I have to use positions from -1 to 1, but I want to be able to say that the size is 16 * 16, for example.
Is there some way to print out errors from GLSL? When I have an error, it just prints out "Couldn't find uniform" because the location is == -1.
How do I create and use matrices to move, scale and rotate in 2D space?
Can I use a Matrix2f instead of a Matrix4f?
Please, if you can, post the code in Java. Thank you!

Disclaimer: I don't have much experience with modern OpenGL, but I have worked with recent versions of DirectX. Most of the general concepts should be the same here.
How do I create a coordinate system?
Nowadays there is no fixed-function pipeline, so you have to do all coordinate transformations by hand inside a vertex shader. It consumes a vertex and produces one transformed into screen space. In the case of 2D graphics there is no need to bother with projection matrices at all.
Consider an input stream consisting of 2D vectors. Rotation, scaling and translation can then be implemented with 3x3 matrices operating on homogeneous 2D coordinates, but the output vectors must be 4D. Pseudocode:
input = getInputVertex();                         // 2D position
transformed = view * world * vec3(input, 1.0);    // 3x3 matrices, homogeneous 2D
output = vec4(transformed.xy, 0.0, 1.0);          // z = 0, w = 1 for the rasterizer
where world is the sprite's world matrix and view is the camera matrix.
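To get a pixel-based coordinate system (so a sprite can be 16 * 16 instead of living in the -1 to 1 range), you can also build the equivalent of glOrtho yourself and upload it as a uniform. A minimal sketch in Java; the uniform name "projection", the program handle and the LWJGL 2-style GL20.glUniformMatrix4 call are assumptions for illustration:

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL20;

// Column-major 4x4 matrix mapping pixel coordinates (0,0)..(width,height)
// to clip space, with (0,0) at the top-left corner. Replaces glOrtho.
static float[] pixelProjection(float width, float height) {
    return new float[] {
        2f / width,  0f,          0f, 0f,   // column 0
        0f,         -2f / height, 0f, 0f,   // column 1 (flips y)
        0f,          0f,         -1f, 0f,   // column 2
       -1f,          1f,          0f, 1f    // column 3
    };
}

// Upload once per window resize, while the shader program is bound:
FloatBuffer buf = BufferUtils.createFloatBuffer(16);
buf.put(pixelProjection(800f, 600f)).flip();
int loc = GL20.glGetUniformLocation(program, "projection");
GL20.glUniformMatrix4(loc, false, buf);   // named glUniformMatrix4fv in LWJGL 3

In the vertex shader this becomes gl_Position = projection * vec4(in_Position, 0.0, 1.0);, so a quad with corners (0, 0) and (16, 16) is exactly 16 pixels wide on screen. That also answers the Matrix2f question: a 2x2 matrix can rotate and scale but can never translate, which is why homogeneous 3x3 (or 4x4) matrices are used. As for GLSL errors: after glCompileShader, glGetShaderInfoLog returns the compiler's error text, which is far more useful than a failed uniform lookup.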

Related

Java OpenGL - Apply different color per side of glutSolidCube

For my Java OpenGL project I am trying to make a Rubik's Cube.
I've got all rotations calculated and working, but there's one thing I don't know how to do: give each side of the cube its own color.
I used glRotatef and glTranslatef to position each of the 27 blocks, and glutSolidCube to draw each block.
How can I give each side of glutSolidCube a different color?
I've looked at textured cubes, but that seems hard as I don't know the (x, y, z) coordinates of each block, I only have the transformation matrix (rotation and translation).
What is the easiest way to do this?
This may not be possible directly: glutSolidCube does not generate color attributes (see the fghCube function in the freeglut source code).
The simplest way would be to generate the geometry of the cube yourself. Generate 6 (faces) * 4 = 24 vertices total, with the expected positions, normals and an additional color attribute per vertex. Just like with the normal attribute, each of the 8 distinct corner positions of a cube must appear with 3 different colors, since the same corner is shared by 3 faces but you need a different color per face.
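A sketch of what that generation might look like in Java; the interleaved position/normal/color layout and the per-face colors are made up for illustration:

// Builds interleaved vertex data for a cube: 6 faces * 4 vertices, each
// vertex storing position (3 floats), normal (3 floats) and color (3 floats).
static final float[][] FACE_NORMALS = {
    {1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}
};
static final float[][] FACE_COLORS = {   // one arbitrary color per face
    {1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 0}, {1, 0, 1}, {0, 1, 1}
};

static float[] buildCubeVertices(float half) {
    float[] data = new float[6 * 4 * 9];
    int i = 0;
    for (int f = 0; f < 6; f++) {
        float[] n = FACE_NORMALS[f];
        float[] u = {n[1], n[2], n[0]};           // tangent (valid for axis-aligned n)
        float[] v = {n[1] * u[2] - n[2] * u[1],   // bitangent = n x u
                     n[2] * u[0] - n[0] * u[2],
                     n[0] * u[1] - n[1] * u[0]};
        for (int k = 0; k < 4; k++) {             // 4 corners of this face
            float su = (k == 1 || k == 2) ? half : -half;
            float sv = (k >= 2) ? half : -half;
            for (int a = 0; a < 3; a++) data[i++] = n[a] * half + u[a] * su + v[a] * sv;
            for (int a = 0; a < 3; a++) data[i++] = n[a];
            for (int a = 0; a < 3; a++) data[i++] = FACE_COLORS[f][a];
        }
    }
    return data;
}

static int[] buildCubeIndices() {                 // two triangles per face
    int[] idx = new int[6 * 6];
    for (int f = 0; f < 6; f++) {
        int b = f * 4, j = f * 6;
        idx[j] = b; idx[j + 1] = b + 1; idx[j + 2] = b + 2;
        idx[j + 3] = b; idx[j + 4] = b + 2; idx[j + 5] = b + 3;
    }
    return idx;
}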
Another way, if you really insist on using glutSolidCube, would be to assign the vertex color based on the vertex normal in the vertex shader. But maybe you are not using vertex shaders...

moving and rotating vertices with LWJGL

Is it possible to move or rotate single vertices, or collections of vertices (for example display lists), instead of changing the coordinates of every single vertex with LWJGL?
Maybe something like GL11.glTranslatef(...), but only for moving parts of the scene.
In addition, I have no idea how to rotate something.
You can use OpenGL 2.0 with a vertex shader to do this, but it's not as simple as glTranslatef.
Google "OpenGL 2.0 vertex transformation": a vertex shader lets you rotate, scale and move vertices.
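For example, instead of editing vertex data you can upload a per-object transform as a uniform before each draw call. A minimal sketch, assuming LWJGL 2-style bindings, a shader declaring "uniform mat4 model;" and a VBO already bound by the caller; the helper name and parameters are made up:

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL20;

// Rotates one part of the scene around the Z axis and moves it by (tx, ty)
// without touching its vertex buffer; only the uniform changes per object.
static void drawTransformed(int program, int vertexCount,
                            float tx, float ty, float angle) {
    float c = (float) Math.cos(angle), s = (float) Math.sin(angle);
    float[] model = {          // column-major: rotate about Z, then translate
         c,  s, 0, 0,
        -s,  c, 0, 0,
         0,  0, 1, 0,
        tx, ty, 0, 1
    };
    FloatBuffer buf = BufferUtils.createFloatBuffer(16);
    buf.put(model).flip();
    GL20.glUniformMatrix4(GL20.glGetUniformLocation(program, "model"), false, buf);
    GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertexCount);
}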

Best way to render a non-cubical sandbox game?

In my game, the world is made of cubes, but the cubes are divided into 5 parts: a tetrahedron and 4 corners. Each type of block has two colors. This is what a block might look like if one corner was cut, although each corner/face may have different colors from the rest.
The problem is, on the tetrahedral faces, I want the edges between the triangles to be seamless. So I can't use textures. (I could, but they would need to be high-res, and if I want to animate the colors (for example on water) this is not an option).
I've found these approaches:
- Drawing each triangle of each tetrahedral face, then each square of each cubical face (using a VBO and all that stuff). Too many polys! Lag ensues, and this was only rendering the tetrahedra.
- Using a fragment shader on the world geometry. The math is simple: for each axis, check whether the point is less than 0.5 within the cube, then XOR the three results. This determines which color to use. I got lag, but I think my code is bad.
- 3D textures on the world geometry. This seems to be the best option given how perfectly it matches my situation, but I really don't know.
- Using instanced geometry with any of the above. I'm not sure about this one; I've read instancing can be slow at large scales. I would need 31 meshes, or more if I want to optimize by skipping hidden surfaces (which is probably unnecessary anyway).
- Using a geometry shader. I've read geometry shaders don't perform well at large scales.
Which of these options would be the most efficient? I think using 3D and 2D textures might be the best option, but if I get lag I want to be sure it's because my code is bad, not because the approach is inefficient.
Edit: Here's my shader code
#version 150 core

in vec4 pass_Position;
in vec4 pass_Color1;
in vec4 pass_Color2;

out vec4 out_Color;

void main(void) {
    // Test which half of the unit cell each axis falls in; ^^ is logical XOR.
    bool x = mod(abs(pass_Position.x), 1.0) <= 0.5;
    bool y = mod(abs(pass_Position.y), 1.0) <= 0.5;
    bool z = mod(abs(pass_Position.z), 1.0) <= 0.5;
    out_Color = (x ^^ y ^^ z) ? pass_Color1 : pass_Color2;
}
The problem is, on the tetrahedral faces, I want the edges between the triangles to be seamless. So I can't use textures. (I could, but they would need to be high-res, and if I want to animate the colors (for example on water) this is not an option).
That's not necessarily the case. Remember that OpenGL doesn't see whole objects, only individual triangles. So when rendering that cut face, it is in no way different from rendering its flat, "fleshless" counterpart.
Any hard edge on the inner tetrahedron doesn't suffer from a texture crease, since the geometrical edge dominates visually. So what I'd do is use a separate 2D planar texture space aligned with the tetrahedral surface, shared by all faces coplanar to it (as a side note: with this approach you could generate the texture coordinates from the vertex position in a vertex shader, as sketched below).
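A sketch of that side note, with the GLSL held in a Java string; the uniforms planeU and planeV (two orthogonal axes spanning the shared face plane) are assumptions:

String planarVert =
    "#version 150 core\n" +
    "in vec3 in_Position;\n" +
    "uniform mat4 mvp;\n" +
    "uniform vec3 planeU;   // first axis of the face-aligned plane\n" +
    "uniform vec3 planeV;   // second axis, orthogonal to planeU\n" +
    "out vec2 pass_TexCoord;\n" +
    "void main() {\n" +
    "    // Projecting the position onto the shared plane yields seamless\n" +
    "    // texture coordinates across all coplanar faces.\n" +
    "    pass_TexCoord = vec2(dot(in_Position, planeU), dot(in_Position, planeV));\n" +
    "    gl_Position = mvp * vec4(in_Position, 1.0);\n" +
    "}\n";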
That being said, simple flat 2D textures will eventually hit limitations. Since you're effectively implementing a variant of an implicit surface tessellator (with the scalar field creating the surface being binary-valued), it makes sense to think about procedural volumetric texture generation in the fragment shader.

OpenGL - Pixel color at specific depth

I have rendered a 3D scene in OpenGL viewed through a gluOrtho (orthographic) projection. In my application I am looking at the front face of a cube with a volume of 100x70x60 mm (which I render as 1000x700x600 pixels). Inside this cube I have rendered a simple blue sphere which sits exactly in the middle and 'fills' the cube (radius 300 pixels).
I now want to read the color value of pixels (in 3D) at specific points within the cube; i.e. I wish to know whether, say, point (100, 100, -200) is blue or blank (black).
glReadPixels only allows 2D extraction of color, and I have tried it with GL_DEPTH_COMPONENT but am unsure what this should return in byte form. Is there a way to combine the two? Am I missing something?
I am using Eclipse with Java and JOGL.
This can't be done in the context of OpenGL; you'll need some sort of scene graph or other space-partitioning scheme working in concert with your application's data structures.
The reason is simple: the frame buffer only stores the color and depth of the fragment nearest to the eye at each pixel location (assuming a normal GL_LESS depth function). The depth value stored in the Z-buffer is used to determine if each subsequent fragment is closer or farther from the eye than the existing fragment, and thus whether the new fragment should replace the old or not. The frame buffer only stores color and depth values from the most recent winner of the depth test, not the entire set of fragments that would have mapped to that pixel location. Indeed, there would be no way to bound the amount of graphics memory required if that were the case.
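What glReadPixels can give you is the depth of the front-most fragment at a window coordinate. A minimal sketch with JOGL (the asker's binding); the gl instance, zNear and zFar (the values passed to glOrtho) are assumed to be in scope, and the imports assume JOGL 2-style package names:

import com.jogamp.common.nio.Buffers;
import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL2;
import java.nio.FloatBuffer;

// Reads the normalized depth of the nearest surface at window coords (x, y).
FloatBuffer depth = Buffers.newDirectFloatBuffer(1);
gl.glReadPixels(x, y, 1, 1, GL2.GL_DEPTH_COMPONENT, GL.GL_FLOAT, depth);
float winZ = depth.get(0);                       // 0 = near plane, 1 = far plane
// With an orthographic projection the mapping back to eye-space z is linear:
float eyeZ = -(zNear + winZ * (zFar - zNear));
// This only locates the nearest surface; fragments behind it were discarded.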
You're not the first to fall for this misconception, so I'll say it in the bluntest way possible: OpenGL doesn't work that way. OpenGL never(!) deals with objects or any complex scenes. The only things OpenGL knows about are framebuffers, shaders and single triangles. Whenever you draw an object, usually composed of triangles, OpenGL will only see one triangle at a time. And once something has been drawn to the framebuffer, whatever was there before is lost.
There are algorithms built on rasterization (which is what OpenGL does) that decompose a rendered scene into its parts; depth peeling is one of them.

OpenGL: Create a sky box?

I'm new to OpenGL. I'm using JOGL.
I would like to create a sky for my world that I can texture with clouds or stars. I'm not sure what the best way to do this is. My first instinct is to make a really big sphere with quadric orientation GLU_INSIDE, and texture that. Is there a better way?
A skybox is a pretty good way to go. You'll want to use a cube map for this. Basically, you render a cube around the camera and map a texture onto the inside of each face of the cube. I believe OpenGL may include this in its fixed-function pipeline, but in case you're taking the shader approach (fixed function is deprecated anyway), you'll want to use cube map samplers (samplerCUBE in Cg, samplerCube in GLSL). When drawing the cube map, you also want to remove the translation from the modelview matrix but keep the rotation (this causes the skybox to "follow" the camera but still lets you look around at different parts of the sky).
The best thing to do is actually to draw the cube map after drawing all opaque objects. This may seem strange, because by default the sky would block other objects, but you can use the following trick (if using shaders) to avoid this: when writing the final output position in the vertex shader, write .xyww instead of .xyzw. This forces the sky to the far plane, which puts it behind everything. The advantage is that there is absolutely zero overdraw!
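A sketch of that trick, with the GLSL held in a Java string; the attribute and uniform names are made up:

String skyboxVert =
    "#version 150 core\n" +
    "in vec3 in_Position;\n" +
    "uniform mat4 projection;\n" +
    "uniform mat4 viewRotation;   // view matrix with its translation removed\n" +
    "out vec3 pass_Direction;\n" +
    "void main() {\n" +
    "    pass_Direction = in_Position;   // direction vector for the cube map\n" +
    "    vec4 pos = projection * viewRotation * vec4(in_Position, 1.0);\n" +
    "    gl_Position = pos.xyww;         // z/w becomes 1.0: pinned to the far plane\n" +
    "}\n";

Since the resulting depth is exactly 1.0, draw the skybox with glDepthFunc(GL_LEQUAL) so it still passes the depth test against the cleared depth buffer.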
Yes.
Making a really big sphere has two major problems. First, you may encounter problems with clipping: the sky may disappear if it lies beyond your far clipping distance, and distant objects entering your sky box will visually pass through what looks like a solid wall. Second, you are wasting a lot of polygons (and a lot of pain) on a very simple effect.
Most people actually use a small cube (hence the name "skybox"). You render the cube in a pre-pass with depth testing turned off, so all other objects render on top of it regardless of their actual distance to you. Just make sure that the length of a side is greater than twice your near clipping distance, and you should be fine.
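In JOGL that pre-pass might look like the following; drawSkyCube is a hypothetical helper that issues the cube's draw call:

// Draw the skybox first with depth testing and depth writes disabled, so
// everything rendered afterwards appears in front of it.
gl.glDisable(GL.GL_DEPTH_TEST);
gl.glDepthMask(false);
drawSkyCube(gl);
gl.glDepthMask(true);
gl.glEnable(GL.GL_DEPTH_TEST);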
Spheres are nice to handle because they easily avoid the distortions and visible corners a box can produce in some situations. Another possibility is a cylinder.
For a really high-quality sky you can run a sky-lighting simulation, setting the sphere colors depending on the time of day (i.e. the sun position!) and the view direction, and add some clouds as 3D objects between the sky sphere and the view position.
