I'm trying to have my camera move the way first-person games do. I have shapes drawn and oriented to look like a hallway, so I need the camera to move forward as if you're moving through the hallway. What lines of code should I use, and where should I put them?
You should look at this from the opposite side: you don't move the camera. Instead, you move the world so that its projection changes according to the position of the camera (which doesn't really exist).
This is usually done by having a view (camera) matrix which embeds the current camera position and orientation; this matrix is used inside your shaders, applied after the model matrix (remember that the multiplication is not commutative, so the order matters).
Take a look at this good tutorial to get the necessary knowledge. Basically, everything reduces to:
gl_Position = camera * model * vec4(vertex, 1);
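On the application side, a rough sketch of what that looks like (using android.opengl.Matrix purely as an example; cameraPos, yaw and speed are your own fields, not anything predefined) is to rebuild the camera matrix every frame from the camera's position and orientation, and simply change that position to "walk" forward:

    float[] viewMatrix = new float[16];
    float[] cameraPos = {0f, 0f, 0f};
    float yaw = 0f;        // facing angle in radians
    float speed = 0.05f;   // units moved per frame

    void onDrawFrame() {
        // Walk forward along the current facing direction.
        cameraPos[0] += (float) Math.sin(yaw) * speed;
        cameraPos[2] -= (float) Math.cos(yaw) * speed;

        // Rebuild the "camera" matrix from position + orientation.
        android.opengl.Matrix.setLookAtM(viewMatrix, 0,
                cameraPos[0], cameraPos[1], cameraPos[2],              // eye
                cameraPos[0] + (float) Math.sin(yaw),                  // look target x
                cameraPos[1],                                          // look target y
                cameraPos[2] - (float) Math.cos(yaw),                  // look target z
                0f, 1f, 0f);                                           // up vector
        // Upload viewMatrix (and the model matrix) as uniforms, then draw the hallway.
    }

The hallway geometry itself never changes; only the matrix you multiply it with does.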
I have two simple questions:
First question:
I have an object in OpenGL in an Android application.
I want to apply physics to the object, like gravity.
For this I've implemented a public PositionVector {x, y, z, 1.0f}
in the Object class. So for example, Sphere.PositionVector = {0f, 0f, 0f, 1f}
for the center of the screen.
When I do the movement now: I have a modelMatrix, and to update the position, should I multiply the modelMatrix with the PositionVector? But then I still get 0, 0, 0, 1.
Matrix.multiplyMV(tempVector, 0, modelMatrix, 0, PositionVector, 0);
Where is the mistake? Or: What is the usual way to do this?
My goal is to always have the current position vector of the Sphere.
Second question:
Does the shader have to do anything with the physics? Or can I calculate the gravity and the resulting vectors in the Java code and then apply a translation matrix to the modelMatrix?
Greetings,
Phil
The first thing that really jumped out at me was the question about movement. Normally, if the movement were only a visual effect, multiplying your position with a model matrix would be fine, but obviously you'll want to handle collisions and that sort of thing as well.
What I normally do is keep track of each object's position and update that value whenever it moves. When I draw, I first do a pass to cull everything that isn't in view and then only draw what my camera is seeing. So in this case, you'll only be drawing what is actually visible on the phone's screen at the moment.
So if you have a view matrix for your camera (to keep it simple for now you can leave out view culling), you'll multiply your object's current position with the model matrix (in this case it is just an identity, so it has no effect), and additionally with the view matrix and projection matrix. So it's something like vec4 clipSpacePosition = projectionMatrix * viewMatrix * modelMatrix * position; (assuming I'm remembering that one correctly).
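That is also why you keep getting {0, 0, 0, 1}: the position vector is the origin and the model matrix is still the identity, so the product cannot be anything else. The translation has to be written into the model matrix first. A minimal sketch with android.opengl.Matrix (the values and names are just placeholders):

    float[] modelMatrix = new float[16];
    float[] positionVector = {0f, 0f, 0f, 1f};   // object-space centre of the sphere
    float[] worldPosition = new float[4];        // the current position you want to track

    android.opengl.Matrix.setIdentityM(modelMatrix, 0);
    android.opengl.Matrix.translateM(modelMatrix, 0, 0f, -0.5f, 0f);   // e.g. the sphere has fallen a bit

    // The multiplication now picks up the translation stored in the model matrix.
    android.opengl.Matrix.multiplyMV(worldPosition, 0, modelMatrix, 0, positionVector, 0);
    // worldPosition is {0, -0.5, 0, 1} instead of {0, 0, 0, 1}.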
Your physics calculations can be done either in your shaders or in your Java code, though I recommend doing it on the Java side while you're still getting the hang of things. OpenCL and transform feedback buffers can be quite tricky when you're still learning how some of this stuff works.
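For the gravity itself, a simple Euler integration on the Java side is enough to get started (a rough sketch; velocity, position, modelMatrix and dt are your own fields, dt being the frame time in seconds):

    static final float GRAVITY = -9.81f;

    void update(float dt) {
        velocity[1] += GRAVITY * dt;        // gravity only pulls along the y-axis here
        position[0] += velocity[0] * dt;
        position[1] += velocity[1] * dt;
        position[2] += velocity[2] * dt;

        // Rebuild the model matrix from the new position before drawing.
        android.opengl.Matrix.setIdentityM(modelMatrix, 0);
        android.opengl.Matrix.translateM(modelMatrix, 0, position[0], position[1], position[2]);
    }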
For the physics portion of your code, I would highly recommend taking a look at Glenn Fiedler's series on game physics; you can find it here.
If you have any more questions, or if there is something you're still uncertain of let me know. :) Good luck!
I'm implementing Warcraft/Age of Empires-style "Fog of War" by writing a Filter class and the appropriate JME material definition with vertex and fragment shaders.
I was able to figure that part out very easily; I can now tint the entire screen, for example.
But now I'm stuck again on computing where a given fragment is located in the world.
So, how can this be done?
The reason I need this: basically, I have a 32x32 texture with which to darken the world at particular places, based on the alpha channel of the texture.
(0, 0) in the texture would correspond to (0, 0, 0) in the world, given a "world map or terrain" of size 100x100.
This doesn't answer the question on how to get the world position of a fragment in a shader used in Filters, but anyway:
It was explained to me that it would be wiser to implement fog of war in all the shaders used by objects in the game, because it's easier and much more easily extendable (one can toggle the FOW effect on individual objects or materials).
The answer is to add a new varying, for example vec3 worldPos, and set it in the vertex shader: worldPos = (g_WorldMatrix * vec4(inPosition, 1.0)).xyz;
That's all. This does not work for Filters though.
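For reference, a rough sketch of how that looks in an object's material shaders (jME-style names like g_WorldMatrix and inPosition; m_FogMap and the 100x100 terrain size are assumptions taken from the question, not fixed names):

    // Vertex shader
    uniform mat4 g_WorldViewProjectionMatrix;
    uniform mat4 g_WorldMatrix;
    attribute vec3 inPosition;
    varying vec3 worldPos;

    void main() {
        worldPos = (g_WorldMatrix * vec4(inPosition, 1.0)).xyz;
        gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    }

    // Fragment shader
    uniform sampler2D m_FogMap;   // the 32x32 fog-of-war texture
    varying vec3 worldPos;

    void main() {
        vec2 fogUV = worldPos.xz / 100.0;                     // map world 0..100 onto texture 0..1
        float visibility = texture2D(m_FogMap, fogUV).a;
        gl_FragColor = vec4(vec3(0.0), 1.0 - visibility);     // or multiply your lit colour by visibility
    }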
I'm currently developing a 2D RPG with LWJGL, and am still in the engine stage of development. I've got a lot of the tech I want created, but one of my big problems is fixing the camera on the player. All the solutions I've seen involve moving the world and keeping the player still, which can work, but it seems apparent that this can cause some calculation issues if not closely monitored. Normally, I'd write a system where I wouldn't have to worry about it, but I refuse, because I eventually intend on adding multiplayer capability, where a moving world would be unplayable.
Is there a way to affix the camera to an object or point that can move WITHOUT using translate to move the world around? Also, I'd like to avoid Slick if possible. That would require me to rework much of my game engine as it currently stands.
Whenever you project a 3D scene onto a 2D screen you need to move everything according to the point of view of the observer (the so-called camera or view).
I guess you can't escape from this. What you usually do is have a Camera object which holds the position and rotation used to build the view matrix, which is then passed to the shaders through a uniform and applied to the vertices of your scene. Passing transformation matrices to shaders is the norm, so you shouldn't feel burdened by it. You can always premultiply it with the perspective matrix.
You must move the whole world to match the position of your camera just because you need to transform everything in your scene as it is seen from that point of view, otherwise how could you then project it on your screen? There is no "move the camera, keep the world still" concept.
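A rough sketch of such a Camera object (using JOML, which is commonly paired with LWJGL; nothing here is engine-specific):

    import org.joml.Matrix4f;
    import org.joml.Vector3f;

    public class Camera {
        public final Vector3f position = new Vector3f();
        public float rotation;   // radians; enough for a 2D top-down view

        // The view matrix is the inverse of the camera's own transform:
        // it "moves the world" by the opposite of the camera's position and rotation.
        public Matrix4f viewMatrix(Matrix4f dest) {
            return dest.identity()
                       .rotateZ(-rotation)
                       .translate(-position.x, -position.y, -position.z);
        }
    }

Each frame you set camera.position to the player's position, upload the view matrix (optionally premultiplied with your projection matrix) as a uniform, and draw. The world data itself is never touched, which also keeps it safe for your multiplayer plans.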
Move the world visually; it's how every other RPG does it. Don't move the actual world's location, though.
Draw everything but the UI normally, then translate it all according to the player's position (i.e. glTranslatef(-player.x, -player.y, 0)). This is all done in the render method. In networked multiplayer, the viewport is done for that specific player (i.e. Bob's screen is translated based on Bob's position, Jane's based on Jane's position). Should you instead want single-screen multiplayer, you will probably have to use multiple framebuffers (one per player) and use them as viewports.
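In the old fixed-function pipeline that looks roughly like this (a sketch; player, drawWorld and drawUI stand in for your own code):

    import static org.lwjgl.opengl.GL11.*;

    public void render() {
        glPushMatrix();
        glTranslatef(-player.x, -player.y, 0f);   // shift the view, not the world data
        drawWorld();                              // tiles, entities, etc. at their real coordinates
        glPopMatrix();

        drawUI();                                 // the UI stays in screen space
    }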
I want to implement a third-person (TPP) camera for my project, but something is not working, and I don't know if I'm using the right concept.
Should I rotate the whole scene's model-view matrices, except my main model which will be centered on the screen, or rotate the lookAt camera?
Another thing: how do I make the model move in a given direction after rotating? (I think moving the whole scene makes it easier?) And how do I add collision detection to it?
Collision detection has nothing to do with OpenGL; you work that out from your game state variables, and you can manage it in the same loop where you handle user input and rendering.
You should use a look-at matrix for the third-person camera: the eye component sits behind the player and the look-at target somewhere in front of it. Perspective can be implemented with a perspective matrix.
So the matrix multiplication will look like:
PerspectiveMatrix * LookAtMatrix * worldSpacePosition
Here is a good answer from gamedev explaining a look-at matrix; most OpenGL / computer graphics books will also cover this.
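A rough sketch of building those matrices (here with JOML; playerPos, playerForward and aspect are placeholders for your own state, not anything predefined):

    import org.joml.Matrix4f;
    import org.joml.Vector3f;

    // The eye sits behind and above the player, looking at a point slightly in front of it.
    Vector3f eye = new Vector3f(playerPos).sub(new Vector3f(playerForward).mul(5f)).add(0f, 2f, 0f);
    Vector3f target = new Vector3f(playerPos).add(new Vector3f(playerForward).mul(2f));

    Matrix4f lookAt = new Matrix4f().setLookAt(eye, target, new Vector3f(0f, 1f, 0f));
    Matrix4f perspective = new Matrix4f().setPerspective((float) Math.toRadians(60.0), aspect, 0.1f, 100f);

    // PerspectiveMatrix * LookAtMatrix; world-space positions are then transformed by this in the shader.
    Matrix4f viewProjection = new Matrix4f(perspective).mul(lookAt);

Moving the model "forward" after a rotation is then just adding a multiple of playerForward to the player's position each frame.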
Are you working with the new or old pipeline model?
I'm making a Minecraft-inspired game with Java and LWJGL, which is already well into development. However, I am not quite sure what method I would use to pick/highlight the nearest block in the exact center of the player's view frustum.
I am already storing frustum and positional data, which I could use.
I had a vague idea about using raycasting, but this seems to be unrelated based on what people have done with raycasting.
So which function or test would I use to determine this?
Raycasting will definitely work. You need to create a ray from the orientation of your camera and its position.
If your camera rotation matrix has no scale, the forward axis is the third column (the z-axis). Depending on your convention, the z-axis may point toward the screen or into the scene, so you may need to negate it to get the ray direction.
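A minimal sketch of that ray test (the 0.05 step size, the reach distance and the BlockTester callback are assumptions, not an existing API; a proper voxel/DDA traversal is more exact, but this is enough to find the block under the crosshair):

    import org.joml.Vector3f;

    interface BlockTester {
        boolean isSolid(int x, int y, int z);
    }

    // Steps from the camera position along its forward direction until a solid block is hit.
    static Vector3f pickBlock(Vector3f origin, Vector3f forward, float reach, BlockTester world) {
        Vector3f dir = new Vector3f(forward).normalize();
        for (float t = 0f; t < reach; t += 0.05f) {
            Vector3f p = new Vector3f(dir).mul(t).add(origin);
            int x = (int) Math.floor(p.x);
            int y = (int) Math.floor(p.y);
            int z = (int) Math.floor(p.z);
            if (world.isSolid(x, y, z)) {
                return new Vector3f(x, y, z);   // the block to highlight
            }
        }
        return null;                            // nothing solid within reach
    }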