So I'm currently working on some FPS game programming in OpenGL (JOGL, more specifically) just for fun, and I wanted to know: what is the recommended way to create an FPS-like camera?
At the moment I basically have a vector for the direction the player is facing, which is added to the current player position upon pressing the "w" (forward) key. The negative of that vector is of course used for the "s" (backward) key. For "a" (left) and "d" (right) I use a vector perpendicular to the direction vector. (I am aware that this would let the player fly, but that is not a problem at the moment.)
Upon moving the mouse, the direction vector will be rotated using trigonometry and matrices. All vectors are, of course, normalized for easy speed control.
Is this the common and/or good way or is there an easier/better way?
The way I have always seen it done is using two angles, yaw and pitch. The two axes of mouse movement correspond to changes in these angles.
You can calculate the forward vector easily with a spherical-to-rectangular coordinate transformation. (pitch=latitude=φ, yaw=longitude=θ)
You can use a fixed up vector (say (0,0,1)) but this means you can't look directly upwards or downwards. (Most games solve this by allowing you to look no steeper than 89.999 degrees.)
The right vector is then the cross product of the forward and up vectors. It will always be parallel to the ground plane since the up vector is always perpendicular to the ground plane.
Left/right strafe keys then use the +/-right vector. For a forward vector parallel to the ground plane, you can take the cross product of the right and the up vectors.
As for the GL part, you can simply use gluLookAt() using the player's origin, the origin plus the forward vector and the up vector.
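To make that concrete, here is a minimal sketch using JOGL's GLU (assuming a recent JOGL where GLU lives in com.jogamp.opengl.glu, a Z-up right-handed world with yaw/pitch in radians; the method and variable names are just placeholders):

import com.jogamp.opengl.glu.GLU;

// Point the camera from the player's position along the yaw/pitch direction.
// The forward vector comes from the spherical-to-rectangular conversion; the
// up vector is the fixed (0,0,1) from above, and gluLookAt builds the view matrix.
static void applyFpsCamera(GLU glu, double px, double py, double pz,
                           double yaw, double pitch) {
    double cp = Math.cos(pitch);
    double fx = Math.cos(yaw) * cp;   // forward.x
    double fy = Math.sin(yaw) * cp;   // forward.y
    double fz = Math.sin(pitch);      // forward.z
    glu.gluLookAt(px, py, pz,                  // eye = player origin
                  px + fx, py + fy, pz + fz,   // target = origin + forward
                  0.0, 0.0, 1.0);              // fixed up vector
}

The right vector used for strafing is then just forward × up, which with up = (0,0,1) works out to (forward.y, -forward.x, 0).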
Oh and please, please add an "invert mouse" option.
Edit: Here's an alternative solution, asked about in another question, that gets rid of the 89.9 problem: build the right vector first (with no pitch information), then derive forward and up from it.
Yes, that's essentially the way I have always seen it done.
Yeah, but in the end you will want to add various other attributes to the camera. To spell it out: keep it tidy if you want to mimic Quake or CS. In the end you might have head bobbing, FoV changes, motion filtering, network lag compensation and more.
Cameras are actually one of the more difficult parts of a good game to get right. That's why developers are usually content with a seriously dull, fixed first-/third-person camera.
You could use Quaternions for your camera rotation. Although I have not tried it myself, they are useful for avoiding gimbal lock.
I read this article http://www.iforce2d.net/b2dtut/ghost-vertices, which explains a solution to my Box2D bodies getting stuck at the intersections of the multiple small fixtures that are supposed to make up a platform. It says to use EdgeShapes with ghost vertices, but after rereading it a few times I am still very confused about how to apply this ghost-vertices method to my problem.
Ghost vertices are calculated automatically in libGDX, so you don't get stuck in the ground. I had the same problem. Instead of using rectangles, use EdgeShape, put in the vertices, and you'll be fine!
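For example, something along these lines with libGDX's Box2D wrapper (a sketch only; the method name and the four-point convention are made up for illustration, and the ghost-vertex setters may differ slightly between versions):

import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.physics.box2d.Body;
import com.badlogic.gdx.physics.box2d.BodyDef;
import com.badlogic.gdx.physics.box2d.EdgeShape;
import com.badlogic.gdx.physics.box2d.World;

// One segment of a platform polyline. The fixture runs from start to end;
// the ghost vertices tell Box2D which edges are adjacent so the solver can
// ignore the internal seams that would otherwise catch the player.
static void addPlatformEdge(World world, Vector2 ghostBefore, Vector2 start,
                            Vector2 end, Vector2 ghostAfter) {
    EdgeShape edge = new EdgeShape();
    edge.set(start, end);
    edge.setHasVertex0(true);
    edge.setVertex0(ghostBefore);   // end point of the previous segment
    edge.setHasVertex3(true);
    edge.setVertex3(ghostAfter);    // start point of the next segment
    Body body = world.createBody(new BodyDef());   // defaults to a static body
    body.createFixture(edge, 0f);
    edge.dispose();
}

If the whole platform is one connected polyline, a ChainShape does the same bookkeeping for you: each interior vertex automatically acts as a ghost vertex for the neighbouring edges.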
As far as I understand, ghost vertices are ignored by collision detection but treated like normal edges by the collision response.
So if you collide with the "main" edge, the collision response calculation starts. At that point there is no isolated "main" edge any more; instead, the ghost vertex (the one closer to the collision point) forms a new, continuous shape together with the "main" edge.
So I guess the ghost vertices act like the adjacent edges, simulating a continuous platform.
This approach should actually solve the problem, whereas other solutions are only workarounds, which in many cases are good enough.
You could, for example, try to "cut" the edges or use a circle shape if possible. In some situations it might be enough.
I'm making a small 2D game and I would like to understand how jumping and standing on objects should work, in theory.
For jumping, should there be some kind of gravity? Should I use collision detection?
A good example of what I want to understand is jumping in Mario.
I think for jumping it would be best to create a simple model that uses "game time". I would personally create a physical model based on gravity, but you can go for whatever you want.
If you want your character to "stand" on an object, you will have to add some collision detection. A good start is to approximate each object by one or more circles (or lines), compute collisions between them, and, when the approximations collide, determine the final state with some more precise method.
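The circle-vs-circle check used for that first approximation is tiny; a sketch (names are placeholders):

// Two circles overlap when the squared distance between their centres is
// no larger than the square of the sum of their radii (avoids a sqrt).
static boolean circlesOverlap(double x1, double y1, double r1,
                              double x2, double y2, double r2) {
    double dx = x2 - x1;
    double dy = y2 - y1;
    double rSum = r1 + r2;
    return dx * dx + dy * dy <= rSum * rSum;
}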
In terms of should there be gravity and collision detection - personally for such a thing I'd say yes and yes! It doesn't have to be complicated but once you have those things in place the pseudocode for the "main loop" becomes relatively simple:
if not colliding with object underneath, apply gravity
if user presses jump key and is colliding with a surface underneath //i.e. if we're in the air already, don't jump again
apply upward velocity
That is of course oversimplified and there are other corner cases to deal with (like making sure that when you're coming down after a jump, you don't end up embedded in or falling through the floor).
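A rough Java sketch of that loop, assuming y grows downwards (so gravity is positive and a jump is a negative impulse); all names and constants are placeholders:

class Jumper {
    static final double GRAVITY = 1800;    // units per second squared, made up
    static final double JUMP_SPEED = 600;  // units per second, made up

    double y, velocityY;
    boolean onGround;   // set by your collision detection

    void update(double dt, boolean jumpPressed) {
        if (!onGround) {
            velocityY += GRAVITY * dt;      // fall while airborne
        } else if (jumpPressed) {
            velocityY = -JUMP_SPEED;        // jump only from the ground
            onGround = false;
        }
        y += velocityY * dt;
        // then re-run collision detection; if the character sank into the
        // floor, snap y back to the surface and zero velocityY
    }
}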
You might want to take a look at Greenfoot which handles a lot of things like all the Java2d stuff, collision detection etc. for you. Even if you don't stick with it it'd be a good platform for building a prototype or playing around with the ideas talked about.
Standing on objects implies collision detection, you can approximate the characters and the environment objects with primitive shapes (rectangles, circles etc.).
To check whether two shapes are colliding, check the axes one by one (the X and Y axes in your case). Here is an explanation of the "separating axis theorem" (Section 1).
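For axis-aligned rectangles that boils down to checking each axis for a gap; a small sketch (parameter names are placeholders):

// The rectangles intersect only if their projections overlap on the X axis
// AND on the Y axis; a gap on either axis separates them.
static boolean rectsOverlap(float ax, float ay, float aw, float ah,
                            float bx, float by, float bw, float bh) {
    boolean overlapX = ax < bx + bw && bx < ax + aw;
    boolean overlapY = ay < by + bh && by < ay + ah;
    return overlapX && overlapY;
}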
Yesterday I came across Craig Reynolds' Boids, and subsequently figured that I'd give implementing a simple 2D version in Java a go.
I've put together a fairly basic setup based closely on Conrad Parker's notes.
However, I'm getting some rather bizarre (in my opinion) behaviour. Currently, my boids move reasonably quickly into a rough grid or lattice, and proceed to twitch on the spot. By that I mean they move around a little and rotate very frequently.
Currently, I have implemented:
Alignment
Cohesion
Separation
Velocity limiting
Initially, my boids are randomly distributed across the screen area (slightly different to Parker's method), and their velocities are all directed towards the centre of the screen area (note that randomly initialised velocities give the same result). Changing the velocity limit value only changes how quickly the boids move into this pattern, not formation of the pattern.
As I see it, this could be:
A consequence of the parameters I'm using (right now my code is as described in Parker's pseudocode; I have not yet tried areas of influence defined by an angle and a radius as described by Reynolds.)
Something I need to implement but am not aware of.
Something I am doing wrong.
The expected behaviour would be something more along the lines of a two dimensional version of what happens in the applet on Reynolds' boids page, although right now I haven't implemented any way to keep the boids on screen.
Has anyone encountered this before? Any ideas about the cause and/or how to fix it? I can post a .gif of the behaviour in question if it helps.
Perhaps your weighting for the separation rule is too strong, causing all the boids to move as far away from all neighboring boids as they can. There are various constants in my pseudocode which act as weights: /100 in rule 1 and /8 in rule 3 (and an implicit *1 in rule 2); these can be tweaked, which is often useful for modelling different behaviors such as closely-swarming insects or gliding birds.
Also the arbitrary |distance| < 100 in the separation rule should be modified to match the units of your simulation; this rule should only apply to boids within close proximity, basically to avoid collisions.
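In Java terms, the weights end up in the velocity update roughly like this (a sketch only; the per-rule vectors are assumed to be computed elsewhere, and SEPARATION_WEIGHT is the knob to turn down if the boids repel each other too hard):

// Parker's constants: /100 for cohesion (rule 1), an implicit *1 for
// separation (rule 2) and /8 for alignment (rule 3).
static final double COHESION_WEIGHT   = 1.0 / 100.0;
static final double SEPARATION_WEIGHT = 1.0;           // try 0.5 or lower
static final double ALIGNMENT_WEIGHT  = 1.0 / 8.0;

static double[] weightedVelocityChange(double[] cohesion, double[] separation,
                                       double[] alignment) {
    return new double[] {
        cohesion[0] * COHESION_WEIGHT + separation[0] * SEPARATION_WEIGHT
                                      + alignment[0] * ALIGNMENT_WEIGHT,
        cohesion[1] * COHESION_WEIGHT + separation[1] * SEPARATION_WEIGHT
                                      + alignment[1] * ALIGNMENT_WEIGHT
    };
}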
Have fun!
If every boid can see every other boid, they will all try to move with the average velocity. If each boid only sees some of the others, separate groups can form.
And if the initial velocities are randomly distributed, that average will be close to zero.
If you confine them to a rectangle (either repelling them from the walls or teleporting them to the other side when they get close) and the separation is too strong, they will be pushed away from the walls (by the walls themselves, or by others who were just teleported, who will then be pushed to the other side, and push and be pushed again).
So try tighter cohesion, limited sight, more space, and distribute them in clusters (pick a random point and place several boids a small random distance from it), not uniformly or normally.
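A possible way to do the clustered spawn (all numbers and names are arbitrary placeholders):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Pick a few random cluster centres, then scatter boids a small random
// distance around each centre instead of spreading them uniformly.
static List<double[]> clusteredPositions(double width, double height,
                                         int clusters, int perCluster,
                                         double clusterRadius, Random rng) {
    List<double[]> positions = new ArrayList<>();
    for (int c = 0; c < clusters; c++) {
        double cx = rng.nextDouble() * width;
        double cy = rng.nextDouble() * height;
        for (int i = 0; i < perCluster; i++) {
            double angle = rng.nextDouble() * 2 * Math.PI;
            double dist = rng.nextDouble() * clusterRadius;
            positions.add(new double[] { cx + dist * Math.cos(angle),
                                         cy + dist * Math.sin(angle) });
        }
    }
    return positions;
}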
I encountered this problem as well. I solved it by making sure that the method for updating each boid's velocity added the new velocity onto the old, instead of resetting it. Essentially, what's happening is this: The boids are trying to move away from each other but can't accelerate (because their velocities are being reset instead of increasing, as they should), thus the "twitching". Your method for updating velocities should look like
def set_velocity(self, dxdy):
    self.velocity = (self.velocity[0] + dxdy[0], self.velocity[1] + dxdy[1])
where velocity and dxdy are 2-tuples.
I wonder if you have a problem with collision rectangles. If you implemented something based on overlapping rectangles (like, say, this), you can end up with the behaviour you describe when two rectangles are close enough that any movement causes them to intersect. (Or even worse if one rectangle can end up totally inside another.)
One solution to this problem is to make sure each boid only looks in a forwards direction. Then you avoid the situation where A cannot move because B is too close in front, but B cannot move because A is too close behind.
A quick check is to actually paint all of your collision rectangles and colour any intersecting ones a different colour. It often gives a clue as to the cause of the stopping and twitching.
I'm new to OpenGL. I'm using JOGL.
I would like to create a sky for my world that I can texture with clouds or stars. I'm not sure what the best way to do this is. My first instinct is to make a really big sphere with quadric orientation GLU_INSIDE, and texture that. Is there a better way?
A skybox is a pretty good way to go. You'll want to use a cube map for this. Basically, you render a cube around the camera and map a texture onto the inside of each face of the cube. I believe OpenGL may include this in its fixed function pipeline, but in case you're taking the shader approach (fixed function is deprecated anyway), you'll want to use cube map samplers (samplerCUBE in Cg, not sure about GLSL). When drawing the cube map, you also want to remove translation from the modelview matrix but keep the rotation (this causes the skybox to "follow" the camera but allows you to look around at different parts of the sky).
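With JOGL's fixed-function pipeline, dropping the translation can look roughly like this (a sketch only; drawSkyCube is a placeholder for however you render the textured cube, and fwdX/fwdY/fwdZ is the camera's forward vector):

import com.jogamp.opengl.GL2;
import com.jogamp.opengl.glu.GLU;

// Build a view matrix with the camera's rotation but no translation, so the
// sky is always centred on the viewer, then draw the cube.
static void drawSky(GL2 gl, GLU glu, double fwdX, double fwdY, double fwdZ) {
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glPushMatrix();
    gl.glLoadIdentity();
    glu.gluLookAt(0, 0, 0,              // eye stays at the origin
                  fwdX, fwdY, fwdZ,     // look in the same direction as the camera
                  0, 0, 1);             // same up vector as the camera
    drawSkyCube(gl);                    // placeholder
    gl.glPopMatrix();
}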
The best thing to do is actually draw the cube map after drawing all opaque objects. This may seem strange because by default the sky will block other objects, but you use the following trick (if using shaders) to avoid this: when writing the final output position in the vertex shader, instead of writing out .xyzw, write .xyww. This will force the sky to the far plane which causes it to be behind everything. The advantage to this is that there is absolutely 0 overdraw!
Yes.
Making a really big sphere has two major problems. First, you may encounter problems with clipping: the sky may disappear if it is outside of your far clipping distance. Additionally, objects that enter your sky from a distance will visually pass through a very solid wall. Second, you are wasting a lot of polygons (and a lot of pain) for a very simple effect.
Most people actually use a small cube (hence the name "skybox"). You need to render the cube in a pre-pass with depth testing turned off; that way, all objects will render on top of the cube regardless of their actual distance from you. Just make sure that the length of a side is greater than twice your near clipping distance, and you should be fine.
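In JOGL that pre-pass looks roughly like this (a sketch; drawSkyCube is again a placeholder for rendering the textured cube):

// Draw the sky first with depth writes and depth testing disabled, so
// everything drawn afterwards ends up in front of it regardless of distance.
static void drawSkyFirst(GL2 gl) {
    gl.glDepthMask(false);
    gl.glDisable(GL2.GL_DEPTH_TEST);
    drawSkyCube(gl);                    // placeholder
    gl.glEnable(GL2.GL_DEPTH_TEST);
    gl.glDepthMask(true);
}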
Spheres are nice to handle as they easily avoid distortions, corners, etc., which may be visible in some situations. Another possibility is a cylinder.
For a really high quality sky you can make a sky lighting simulation, setting the sphere colors depending on the time (=> sun position!) and direction, and add some clouds as 3D objects between the sky sphere and the view position.
I'm currently writing an application that acts as a "cut" tool for 3D meshes. I've run into some problems with it that I'm clueless about how to solve, since this is my first application.
I have loaded a model from an object file onto the canvas, then on the same canvas, I use the mouse drag event to draw lines to define the cutting point.
Let's say I want to cut a ball in half and I draw the line across the middle. How do I detect the vertices of the ball under the line?
Secondly, if I rotate/translate the ball, would all the vertex information change?
Think of what you'd do in the real world: You can't cut a ball with a line, you must use a knife (a line has no volume). To cut the ball, you must move the knife through the ball.
So what you're looking after is a plane, not a line. To get such a plane, you must use some 3D math. What you have is the canvas orientation and the "side view" of the plane (which looks like a line).
So the plane you're looking for is perpendicular to the canvas. A simple way to get such a plane is to take the canvas orientation and create a plane which has the same orientation and then rotate the plane around the line by 90°.
After that, you can visit all edges of your model and determine on which side of the plane they are. For this, determine on which side of the plane each end point of the edge lies: take the dot product of the plane's normal with the vector from a point on the plane to the end point; the sign tells you the side. If both end points are on the same side (both dot products have the same sign), you can ignore the edge. Otherwise, you need to determine the intersection point of the edge and the plane. Create new edges and connect them accordingly.
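A sketch of the two bits of math involved, with points and the plane normal as plain double[3] arrays (the representation is just for illustration):

// Signed distance of point v from the plane through p0 with normal n.
// The sign tells you which side of the plane v lies on.
static double side(double[] n, double[] p0, double[] v) {
    return n[0] * (v[0] - p0[0]) + n[1] * (v[1] - p0[1]) + n[2] * (v[2] - p0[2]);
}

// Intersection of the edge (a, b) with the plane, assuming side(n, p0, a)
// and side(n, p0, b) have opposite signs.
static double[] intersect(double[] n, double[] p0, double[] a, double[] b) {
    double da = side(n, p0, a);
    double db = side(n, p0, b);
    double t = da / (da - db);          // fraction of the way from a to b
    return new double[] {
        a[0] + t * (b[0] - a[0]),
        a[1] + t * (b[1] - a[1]),
        a[2] + t * (b[2] - a[2])
    };
}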
See this page for some background on the math. But you should find some helper methods for all this in your opengl library.
if I rotate/translate the ball, would all the vertex information change
Of course.
It's not going to be that easy.
I assume the line you are drawing induces a plane which then cuts the sphere.
To do so, you have to calculate the intersecting area of the sphere and the plane.
This is not a trivial task and I suggest using an existing framework for this or if you really want to do this yourself, read about basic intersection problems to get a feeling for this kind of problem. This paper offers a good introduction to various intersection tests.
In general, boundary-represented volumes, as in your case, are difficult to handle when it comes to more advanced manipulations. Cutting a sphere in half is easy compared to boring a small hole into it. Sometimes it's better to use a volume representation, like tetrahedral meshes or CSG.
Regarding your second question: you shouldn't rotate or translate the sphere; rotate and translate the camera instead.