What's the general convention when defining camera/object z positions? - Java

I'm making a switch to generate matrices myself in java, to pass into my opengl shaders.
I've created a method to generate the perspective matrix, which works fine. But currently my objects are drawn at z position 0.0f, which means when the app runs I can only see one of my custom objects (a square), really close up.
Should I be setting my camera's z position to 0.0f (which is what happens currently), or all my objects' z positions to 0.0f?
createPerspectiveProjection(60.0f, width / height, 0.1f, 100.0f);
/**
 * @param fov
 * @param aspect
 * @param zNear
 * @param zFar
 * @return projectionMatrix
 */
private Matrix4f createPerspectiveProjection(float fov, float aspect, float zNear, float zFar){
    Matrix4f mat = new Matrix4f();
    float yScale = (float) (1 / (Math.tan(Math.toRadians(fov / 2))));
    float xScale = yScale / aspect;
    float frustumLength = zFar - zNear;
    mat.m00 = xScale;
    mat.m11 = yScale;
    mat.m22 = -((zFar + zNear) / frustumLength);
    mat.m23 = -1;
    mat.m32 = -((2 * zFar * zNear) / frustumLength);
    mat.m33 = 0;
    return mat;
}

The standard projection matrix you are using corresponds to a camera placed at the origin, and looking down the negative z-axis. The near and far values determine the range of negative z-values that are within the view frustum. With the values in the example, this means that z-values between -0.1f and -100.0f are visible. That's as long as they're within the pyramid defined by the field of view angle, of course.
How you place your camera and objects in world space is completely up to you. If you emulate a traditional OpenGL rendering pipeline, you'll have a model-view matrix that transforms your objects to place them in the view frustum described above. This means that visible objects should have a negative z-value after the model-view transformation is applied.
The absolutely simplest way of achieving this is to use the identity transformation (i.e. no transformation at all) for the model-view transformation, and place your objects around the negative z-axis.
However, it's often convenient to have your objects placed somewhere around the origin. One simple way of allowing this to work is to place the camera on the positive z-axis, and point it at the origin. The model-view matrix then becomes particularly simple. It's only a translation in the negative z-direction, to shift the camera from its position on the positive z-axis to the origin, matching how the projection matrix was set up.
For example, with your near/far values of 0.1/100.0, you could place the camera at (0.0, 0.0, 50.0) in world space. The view transformation is then a translation by (0.0, 0.0, -50.0). The z-range from 49.9 to -50.0 in world space is then the visible range, allowing you to place your objects around the origin.
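To make this concrete, here is a minimal sketch using plain float arrays instead of LWJGL's Matrix4f (the column-major layout and the method names are assumptions for illustration): a view matrix that is nothing but a translation by (0, 0, -50), applied to a point at the world origin, places it at z = -50 in view space, inside the visible range.

```java
public class ViewTranslation {
    // Column-major 4x4 view matrix for a camera at (0, 0, camZ) looking down -z:
    // identity plus a translation of the world by (0, 0, -camZ).
    static float[] viewMatrix(float camZ) {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f; // identity diagonal
        m[14] = -camZ;                    // translation z (column 3, row 2)
        return m;
    }

    // Multiply the matrix by (x, y, z, 1) and return the resulting z component.
    static float viewZ(float[] m, float x, float y, float z) {
        return m[2] * x + m[6] * y + m[10] * z + m[14];
    }

    public static void main(String[] args) {
        float[] view = viewMatrix(50f);
        // An object at the world origin ends up at z = -50 in view space,
        // which is inside the visible range for zNear 0.1 / zFar 100.
        System.out.println(viewZ(view, 0f, 0f, 0f)); // -50.0
    }
}
```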

Related

Rotating Coordinates (Java and Geometry)

I am working on a 2D java game engine using AWT canvas as a basis. Part of this game engine is that it needs to have hitboxes with collision. Not just the built-in rectangles (tried that system already): I need my own Hitbox class because I need more functionality. So I made one, supporting circular and 4-sided polygon hitboxes. The way the hitbox class is set up is that it uses four coordinate points to serve as the 4 corner vertices that connect to form a rectangle. Lines are drawn connecting the points, and these are the lines that are used to detect intersections with other hitboxes. But I now have a problem: rotation.
There are two possibilities for a box hitbox: it can just be four coordinate points, or it can be four coordinate points attached to a gameobject. The difference is that the former is just 4 coordinates based on (0,0) as the origin, while the gameobject-attached kind stores offsets in the coordinates rather than raw location data, so (-100,-100) for example represents the location of the host gameobject but 100 pixels to the left and 100 pixels up.
Online I found a formula for rotating points about the origin. Since Gameobject based hitboxes were centered around a particular point, I figured that would be the best option to try it on. This code runs each tick to update a player character's hitbox
//creates a rectangle hitbox around this gameobject
int width = getWidth();
int height = getHeight();
Coordinate[] verts = new Coordinate[4]; //corners of hitbox. topLeft, topRight, bottomLeft, bottomRight
verts[0] = new Coordinate(-width / 2, -height / 2);
verts[1] = new Coordinate(width / 2, -height / 2);
verts[2] = new Coordinate(-width / 2, height / 2);
verts[3] = new Coordinate(width / 2, height / 2);
//now go through each coordinate and adjust it for rotation
for(Coordinate c : verts){
    if(!name.startsWith("Player")) return; //this is here so only the player character is tested
    double theta = Math.toRadians(rotation);
    c.x = (int)(c.x*Math.cos(theta)-c.y*Math.sin(theta));
    c.y = (int)(c.x*Math.sin(theta)+c.y*Math.cos(theta));
}
getHitbox().vertices = verts;
I apologize for the poor video quality, but this is what the results of the above look like: https://www.youtube.com/watch?v=dF5k-Yb4hvE
All related classes are found here: https://github.com/joey101937/2DTemplate/tree/master/src/Framework
edit: The desired effect is for the box outline to follow the character in a circle while maintaining aspect ratio as seen here: https://www.youtube.com/watch?v=HlvXQrfazhA . The current system uses the code above, the effect of which can be seen above in the previous video link. How should I modify the four 2D coordinates to maintain relative aspect ratio throughout a rotation about a point?
current rotation system is the following:
x = x*cos(theta) - y*sin(theta)
y = x*sin(theta) + y*cos(theta)
where theta is the rotation angle in radians
You made a classic mistake:
c.x = (int)(c.x*Math.cos(theta)-c.y*Math.sin(theta));
c.y = (int)(c.x*Math.sin(theta)+c.y*Math.cos(theta));
In the second line you use the already-modified value of c.x. Just save tempx = c.x
before the calculations and use it:
tempx = c.x;
c.x = (int)(tempx*Math.cos(theta)-c.y*Math.sin(theta));
c.y = (int)(tempx*Math.sin(theta)+c.y*Math.cos(theta));
Another issue: rounding the coordinates to ints after each rotation causes distortion and shrinking over many rotations. It would be wise to store the coordinates as floats and round them only for output, or to keep the original vertex positions and rotate them by the accumulated angle each time.
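Both fixes can be sketched together in plain Java (the names rotate and RotationFix are illustrative): use a temporary so x and y are computed from the same inputs, and keep double precision so repeated rotations don't distort the box.

```java
public class RotationFix {
    // Rotate (x, y) about the origin by theta radians. Computing rx and ry
    // before assigning avoids the classic bug of feeding the updated x
    // into the y formula.
    static double[] rotate(double x, double y, double theta) {
        double cos = Math.cos(theta), sin = Math.sin(theta);
        double rx = x * cos - y * sin;
        double ry = x * sin + y * cos;
        return new double[] { rx, ry };
    }

    public static void main(String[] args) {
        // Rotate the corner (50, -25) a full turn in four 90-degree steps:
        // without per-step int rounding we land back where we started.
        double x = 50, y = -25;
        for (int i = 0; i < 4; i++) {
            double[] p = rotate(x, y, Math.toRadians(90));
            x = p[0];
            y = p[1];
        }
        System.out.printf("%.6f %.6f%n", x, y); // 50.000000 -25.000000
    }
}
```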

Transformations from pixels to NDC

Let's say my screen is 800 × 600 and I have a 2D quad drawn as a triangle strip with the following vertex positions (in NDC):
float[] vertices = {-0.2f,0.2f,-0.2f,-0.2f,0.2f,0.2f,0.2f,-0.2f};
And I set up my Transformation Matrix in this way :
Vector2f position = new Vector2f(0,0);
Vector2f size = new Vector2f(1.0f,1.0f);
Matrix4f tranMatrix = new Matrix4f();
tranMatrix.setIdentity();
Matrix4f.translate(position, tranMatrix, tranMatrix);
Matrix4f.scale(new Vector3f(size.x, size.y, 1f), tranMatrix, tranMatrix);
And my vertex Shader :
#version 150 core
in vec2 in_Position;
uniform mat4 transMatrix;
void main(void) {
gl_Position = transMatrix * vec4(in_Position,0,1.0);
}
My question is: which formula should I use to modify the transformations of my quad using coordinates in pixels?
For example :
set Scale : (50px, 50px) => Vector2f(width,height)
set Position : (100px, 100px) => Vector2f(x,y)
To better understand: I would like to create a function that converts my pixel data to NDC before sending it to the vertex shader. I was advised to use an orthographic projection, but I don't know how to create one correctly, and as you can see my vertex shader doesn't use any projection matrix.
Here is a topic similar to mine but not very clear - Transform to NDC, calculate and transform back to worldspace
EDIT:
I created my orthographic projection matrix by following the formula, but nothing seems to appear. Here is how I proceeded:
public static Matrix4f glOrtho(float left, float right, float bottom, float top, float near, float far){
    final Matrix4f matrix = new Matrix4f();
    matrix.setIdentity();
    matrix.m00 = 2.0f / (right - left);
    matrix.m01 = 0;
    matrix.m02 = 0;
    matrix.m03 = 0;
    matrix.m10 = 0;
    matrix.m11 = 2.0f / (top - bottom);
    matrix.m12 = 0;
    matrix.m13 = 0;
    matrix.m20 = 0;
    matrix.m21 = 0;
    matrix.m22 = -2.0f / (far - near);
    matrix.m23 = 0;
    matrix.m30 = -(right+left)/(right-left);
    matrix.m31 = -(top+bottom)/(top-bottom);
    matrix.m32 = -(far+near)/(far-near);
    matrix.m33 = 1;
    return matrix;
}
I then included my matrix in the vertex shader
#version 140
in vec2 position;
uniform mat4 projMatrix;
void main(void){
gl_Position = projMatrix * vec4(position,0.0,1.0);
}
What did I miss ?
New Answer
After clarifications in the comments, the question being asked can be summed up as:
How do I effectively transform a quad in terms of pixels for use in a GUI?
As mentioned in the original question, the simplest approach to this will be using an Orthographic Projection. What is an Orthographic Projection?
a method of projection in which an object is depicted or a surface mapped using parallel lines to project its shape onto a plane.
In practice, you may think of this as a 2D projection. Distance plays no role, and the OpenGL coordinates map to pixel coordinates. See this answer for a bit more information.
By using an Orthographic Projection instead of a Perspective Projection you can start thinking of all of your transformations in terms of pixels.
Instead of defining a quad as (25 x 25) world units in dimension, it is (25 x 25) pixels in dimension.
Or instead of translating by 50 world units along the world x-axis, you translate by 50 pixels along the screen x-axis (to the right).
So how do you create an Orthographic Projection?
First, they are usually defined using the following parameters:
left - X coordinate of the left vertical clipping plane
right - X coordinate of the right vertical clipping plane
bottom - Y coordinate of the bottom horizontal clipping plane
top - Y Coordinate of the top horizontal clipping plane
near - Near depth clipping plane
far - Far depth clipping plane
Remember, all units are in pixels. A typical Orthographic Projection would be defined as:
glOrtho(0.0, windowWidth, windowHeight, 0.0f, 0.0f, 1.0f);
Assuming you do not (or can not) make use of glOrtho (you have your own Matrix class or another reason), then you must calculate the Orthographic Projection matrix yourself.
The Orthographic Matrix is defined as:
2/(r-l) 0 0 -(r+l)/(r-l)
0 2/(t-b) 0 -(t+b)/(t-b)
0 0 -2/(f-n) -(f+n)/(f-n)
0 0 0 1
Source A, Source B
At this point I recommend using a pre-made mathematics library unless you are determined to use your own. One of the most common bug sources I see in practice is matrix-related, and the less time you spend debugging matrices, the more time you have to focus on other, more fun endeavors.
GLM is a widely-used and respected library that is built to model GLSL functionality. The GLM implementation of glOrtho can be seen here at line 100.
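If you do roll your own, a minimal sketch of the matrix above in plain Java (row-major double arrays, no library types; the names Ortho, ortho, and apply are illustrative) lets you sanity-check that window corners land on the NDC boundaries:

```java
public class Ortho {
    // Row-major 4x4 orthographic projection, exactly per the matrix above.
    static double[][] ortho(double l, double r, double b, double t, double n, double f) {
        return new double[][] {
            { 2 / (r - l), 0, 0, -(r + l) / (r - l) },
            { 0, 2 / (t - b), 0, -(t + b) / (t - b) },
            { 0, 0, -2 / (f - n), -(f + n) / (f - n) },
            { 0, 0, 0, 1 }
        };
    }

    // Apply the matrix to (x, y, z, 1) and return (x', y', z').
    static double[] apply(double[][] m, double x, double y, double z) {
        double[] v = { x, y, z, 1 };
        double[] out = new double[3];
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 4; col++)
                out[row] += m[row][col] * v[col];
        return out;
    }

    public static void main(String[] args) {
        // 800x600 window with a top-left pixel origin: glOrtho(0, 800, 600, 0, 0, 1).
        double[][] m = ortho(0, 800, 600, 0, 0, 1);
        // Pixel (0, 0) maps to NDC (-1, 1): the top-left corner of the screen.
        System.out.println(java.util.Arrays.toString(apply(m, 0, 0, 0)));
        // Pixel (400, 300) maps to NDC (0, 0): the center of the screen.
        System.out.println(java.util.Arrays.toString(apply(m, 400, 300, 0)));
    }
}
```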
How to use an Orthographic Projection?
Orthographic projections are commonly used to render a GUI on top of your 3D scene. This can be done easily enough by using the following pattern:
Clear Buffers
Apply your Perspective Projection Matrix
Render your 3D objects
Apply your Orthographic Projection Matrix
Render your 2D/GUI objects
Swap Buffers
Old Answer
Note that this answered the wrong question. It assumed the question boiled down to "How do I convert from Screen Space to NDC Space?". It is left in case someone searching comes upon this question looking for that answer.
The goal is convert from Screen Space to NDC Space. So let's first define what those spaces are, and then we can create a conversion.
Normalized Device Coordinates
NDC space is simply the result of performing perspective division on our vertices in clip space.
clip.xyz /= clip.w
Where clip is the coordinate in clip space.
What this does is place all of our un-clipped vertices into a unit cube (on the range of [-1, 1] on all axis), with the screen center at (0, 0, 0). Any vertices that are clipped (lie outside the view frustum) are not within this unit cube and are tossed away by the GPU.
In OpenGL this step is done automatically as part of Primitive Assembly (D3D11 does this in the Rasterizer Stage).
Screen Coordinates
Screen coordinates are simply calculated by expanding the normalized coordinates to the confines of your viewport.
screen.x = ((view.w * 0.5) * ndc.x) + ((view.w * 0.5) + view.x)
screen.y = ((view.h * 0.5) * ndc.y) + ((view.h * 0.5) + view.y)
screen.z = (((view.f - view.n) * 0.5) * ndc.z) + ((view.f + view.n) * 0.5)
Where,
screen is the coordinate in screen-space
ndc is the coordinate in normalized-space
view.x is the viewport x origin
view.y is the viewport y origin
view.w is the viewport width
view.h is the viewport height
view.f is the viewport far
view.n is the viewport near
Converting from Screen to NDC
As we have the conversion from NDC to Screen above, it is easy to calculate the reverse.
ndc.x = (((2.0 * screen.x) - (2.0 * view.x)) / view.w) - 1.0
ndc.y = (((2.0 * screen.y) - (2.0 * view.y)) / view.h) - 1.0
ndc.z = ((2.0 * screen.z) - view.f - view.n) / (view.f - view.n)
Example:
viewport (x, y, w, h, n, f) = (0, 0, 800, 600, 1, 1000)
screen.xyz = (400, 300, 200)
ndc.xyz = (0.0, 0.0, -0.602)
screen.xyz = (575, 100, 1)
ndc.xyz = (0.4375, -0.667, -1.0)
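Both directions can be sketched in plain Java (viewport parameters passed explicitly; the names ScreenNdc, toScreen, and toNdc are illustrative) so the conversion can be checked to round-trip:

```java
public class ScreenNdc {
    // NDC -> screen for a viewport with origin (vx, vy), size (w, h),
    // and depth range [n, f], following the forward formulas above.
    static double[] toScreen(double[] ndc, double vx, double vy,
                             double w, double h, double n, double f) {
        return new double[] {
            (w * 0.5) * ndc[0] + (w * 0.5 + vx),
            (h * 0.5) * ndc[1] + (h * 0.5 + vy),
            ((f - n) * 0.5) * ndc[2] + (f + n) * 0.5
        };
    }

    // Screen -> NDC: the inverse of the mapping above.
    static double[] toNdc(double[] s, double vx, double vy,
                          double w, double h, double n, double f) {
        return new double[] {
            (2.0 * s[0] - 2.0 * vx) / w - 1.0,
            (2.0 * s[1] - 2.0 * vy) / h - 1.0,
            (2.0 * s[2] - f - n) / (f - n)
        };
    }

    public static void main(String[] args) {
        // 800x600 viewport at (0, 0), depth range [1, 1000]:
        double[] ndc = toNdc(new double[] { 400, 300, 200 }, 0, 0, 800, 600, 1, 1000);
        System.out.printf("%.3f %.3f %.3f%n", ndc[0], ndc[1], ndc[2]); // 0.000 0.000 -0.602
    }
}
```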
Further Reading
For more information on all of the transform spaces, read OpenGL Transformation.
Edit for Comment
In the comment on the original question, Bo specifies screen-space origin as top-left.
For OpenGL, the viewport origin (and thus screen-space origin) lies at the bottom-left. See glViewport.
If your pixel coordinates are truly top-left origin then that needs to be taken into account when transforming screen.y to ndc.y.
ndc.y = 1.0 - (((2.0 * screen.y) - (2.0 * view.y)) / view.h)
This is needed if you are transforming, say, a coordinate of a mouse-click on screen/gui into NDC space (as part of a full transform to world space).
NDC coordinates are transformed to screen (i.e. window) coordinates using glViewport. This function (which you must call in your app) defines a portion of the window by an origin and a size.
The formulas used can be seen at https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glViewport.xml
(x, y) is the origin, normally (0, 0), the bottom-left corner of the window.
While you can derive the inverse formulas on your own, here you have them: https://www.khronos.org/opengl/wiki/Compute_eye_space_from_window_space#From_window_to_ndc
If I understand the question, you're trying to map screen-space coordinates (the ones that span the size of your screen) into the -1 to 1 range. If so, it's quite simple. The equation is:
((screen_space_coord / width_or_height_of_screen) * 2) - 1
This works because, for example, on an 800 × 600 screen:
  800 / 800 = 1
  1 * 2 = 2
  2 - 1 = 1
and to check a coordinate at half the screen height:
  300 / 600 = 0.5
  0.5 * 2 = 1
  1 - 1 = 0 (NDC is from -1 to 1 so 0 is middle)

Why doesn't simply scaling things up in LibGDX and Box2D work correctly?

I'm trying to get rid of having to scale all the coordinates on my sprites when using Box2D and LibGDX.
Here are the settings for my viewport and physics world:
// Used to create an Extend Viewport
public static final int MIN_WIDTH = 480;
public static final int MIN_HEIGHT = 800;
// Used to scale all sprite's coordinates
public static final float PIXELS_TO_METERS = 100f;
// Used with physics world.
public static final float GRAVITY = -9.8f;
public static final float IMPULSE = 0.15f;
world.setGravity(new Vector2(0f, GRAVITY));
When I apply a linear impulse to my character (when the user taps the screen) everything works fine:
body.setLinearVelocity(0f, 0f);
body.applyLinearImpulse(0, IMPULSE, body.getPosition().x, body.getPosition().y, true);
The body has a density of 0f, but changing this to 1f or even 100f doesn't seem to have any real effect.
This means that I have to scale all the sprite's locations in the draw method by PIXELS_TO_METERS. I figured (perhaps incorrectly) that I could simply scale GRAVITY and IMPULSE by PIXELS_TO_METERS and have it work exactly the same. This doesn't seem to be the case. Gravity seems really small, and applying the impulse barely has any effect at all.
// Used to scale all sprite's coordinates
public static final float PIXELS_TO_METERS = 1f;
// Used with physics world.
public static final float GRAVITY = -9.8f * 100;
public static final float IMPULSE = 0.15f * 100;
So:
1) why doesn't simply scaling up all the values make it work the same?
2) Is there a better way to do this?
It looks like you're overcomplicating your design by using some imaginary pixel units (I doubt they are actual pixels you're referring to). This also answers your first question: Box2D's internal tolerances and solver constants are defined in meters, and it is tuned for bodies roughly 0.1 to 10 meters in size, so multiplying all your values by 100 does not produce the same simulation. I'd advise you to use meaningful units instead, for example meters, and stick to them. Thus, use meters for all coordinates (including your virtual viewport). So, practically, modify your code to look like this:
// Used to create an Extend Viewport
public static final float MIN_WIDTH = 4.8f; // at least 4.8 meters in width is visible
public static final float MIN_HEIGHT = 8.0f; // at least 8 meter in height is visible
This completely removes the need to scale meters to your imaginary pixel units. The actual scaling from your units (the virtual viewport size) to the screen (values between -1 and +1) is done by the camera. You should not have to think about scaling units in your design.
Make sure to remove your PIXELS_TO_METERS constant (don't set it to 1f; it only complicates your code at no gain) and make sure you're not using imaginary pixels anywhere in your code. The latter includes all sprites that you create without explicitly specifying their size in meters.
It is still possible to "scale" your units (in your game logic) compared to SI units, because of valid reasons. For example, when creating a space game, you might find yourself using very large numbers when using meters. Typically you'd want to keep the values around 1f to avoid floating point errors. In such case it can be useful to use e.g. dekameters (x10), hectometers (x100) or kilometers (x1000) instead. If you do this, make sure to be consistent. It might help to add the units in comments so you don't forget to scale properly (e.g. GRAVITY = -0.0098f; // kilometer per second per second).
I have implemented it like this:
// in declaration
float PIXELS_TO_METERS = 32; // in my case: 1m = 32 pixels
Matrix4 projection = new Matrix4();
// in creation
viewport = new FitViewport(
Application.width
, Application.height
, Application.camera);
viewport.apply();
// set viewport dimensions
camera.setToOrtho(false, gameWidth, gameHeight);
// in render
projection.set(batch.getProjectionMatrix());
projection.scl(PIXELS_TO_METERS);
box2dDebugRenderer.render(world, projection);
// in player body creation
BodyDef bodyDef = new BodyDef();
bodyDef.type = BodyDef.BodyType.DynamicBody;
bodyDef.position.x = getPosition().x / PIXELS_TO_METERS;
bodyDef.position.y = getPosition().y / PIXELS_TO_METERS;
CircleShape shape = new CircleShape();
shape.setRadius((getBounds().width * 0.5f) / PIXELS_TO_METERS);
// in player update
setPosition(
body.getPosition().x * PIXELS_TO_METERS - playerWidth,
body.getPosition().y * PIXELS_TO_METERS - playerHeight);
So, to pass pixel positions into Box2D methods you divide them by PIXELS_TO_METERS, and to turn Box2D values back into a pixel position for the player you multiply them by PIXELS_TO_METERS.
Set PIXELS_TO_METERS to how many pixels on your screen correspond to 1 meter.
Good luck.
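The divide/multiply pattern above can be captured in a pair of tiny helpers (a sketch; the 32 pixels-per-meter value and the names Units, toMeters, and toPixels are just examples):

```java
public class Units {
    static final float PIXELS_TO_METERS = 32f; // example: 1 m = 32 px

    // Box2D works in meters: divide pixel values going into the physics world...
    static float toMeters(float pixels) { return pixels / PIXELS_TO_METERS; }

    // ...and multiply physics values coming back out for drawing.
    static float toPixels(float meters) { return meters * PIXELS_TO_METERS; }

    public static void main(String[] args) {
        System.out.println(toMeters(160f));           // 5.0 (a 160 px offset is 5 m)
        System.out.println(toPixels(toMeters(97f)));  // 97.0 (round trip is lossless)
    }
}
```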

How to work out if a point is behind the field of view

So I'm rendering points in 3D space. To find their X position on the screen, I'm using this math:
double sin = Math.sin(viewPointRotX);
double cos = Math.cos(viewPointRotX);
double xx = x - viewPointX;
double zz = z - viewPointZ;
double rotx = xx * cos - zz * sin;
double rotz = zz * cos + xx * sin;
double xpix = (rotx / rotz * height + width / 2);
I'm doing a similar process for Y.
This works fine, but points can render as if they were in front of the camera when they are actually behind it.
How can I work out using the data I've got whether a given point is in front of or behind the camera?
We can tell if a point is in front or behind a camera by comparing coordinates with a little 3D coordinate geometry.
Consider a very simple example: The camera is located at (0,0,0) and pointed straight up. That would mean every point with a positive Z coordinate is "in front" of the camera. Now, we could get more complicated and account for the fact that the field of view of a camera is really a cone with a particular angle.
For instance, if the camera has a 90 degree field of view (45 degrees in both directions of the way it is facing), then we can handle this with some linear math. In 2D space, again with camera at the origin facing up, the field of view would be bound by the lines y = x and y = -x (or all points within y = |x|).
For 3D space, with the camera at the origin looking along the positive z-axis, spin that 45-degree boundary line around the z-axis. We now have the cone:
z² = x² + y², so if
if (z > 0 && z * z > x * x + y * y)
then the point is in front of the camera and within the field of view.
If the camera is not at the origin, we can just translate the coordinate system so that it is, and then run the calculation above. If the viewing angle is not 90 degrees, then we trace out the line y = mx (where m = 1 / Math.tan(viewAngle / 2)) and spin this line about the view axis to get the view cone.
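The cone containment test can be sketched in plain Java, generalized to an arbitrary field-of-view angle; for a true cone the comparison is z > m·sqrt(x² + y²) with z positive, with m = 1/tan(fov/2). The names FovCone and insideCone are illustrative:

```java
public class FovCone {
    // Camera at the origin looking along +z. A point is inside the view cone
    // when z is positive and the point lies within the half-angle of the cone:
    // z > m * sqrt(x^2 + y^2), where m = 1 / tan(fov / 2).
    static boolean insideCone(double x, double y, double z, double fovDegrees) {
        double m = 1.0 / Math.tan(Math.toRadians(fovDegrees / 2));
        return z > 0 && z > m * Math.sqrt(x * x + y * y);
    }

    public static void main(String[] args) {
        // With a 90-degree cone (m = 1): (1, 0, 2) is inside, (3, 0, 2) is
        // outside the angle, and anything with negative z is behind the camera.
        System.out.println(insideCone(1, 0, 2, 90));   // true
        System.out.println(insideCone(3, 0, 2, 90));   // false
        System.out.println(insideCone(1, 0, -2, 90));  // false
    }
}
```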
It looks like the way you are doing this is transforming the point coordinates so that they are aligned with, and relative to, the view.
If you do this the right way, then you can just check that the rotated z-value is positive and greater than the distance from the focal point to the "lens".
Find the view direction vector:
V = (cos(a)sin(b), sin(a)sin(b), cos(b)), where a and b are the camera's rotation angles.
Project the offset vector (xx, yy, zz) onto the view direction vector, and find the magnitude, giving the distance along the camera's view axis of the point:
distance = xx * cos(a)sin(b) + yy * sin(a)sin(b) + zz * cos(b)
Now just check that distance > focalLength.
This should work but you have to be careful to set everything up right. You might have to use a different calculation to find the view direction vector depending on how you are representing the camera's orientation.
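As a sketch using the rotation from the question (the names FrontCheck and inFront are illustrative; which sign of rotz counts as "in front" depends on your projection convention, so treat the > 0 check as an assumption to verify against your own setup):

```java
public class FrontCheck {
    // Transform the point into camera-relative coordinates and rotate by the
    // camera yaw, exactly as in the question's code, then test the sign of
    // the rotated z. Here "in front" is taken to be rotz > 0.
    static boolean inFront(double x, double z, double camX, double camZ, double yaw) {
        double xx = x - camX;
        double zz = z - camZ;
        double rotz = zz * Math.cos(yaw) + xx * Math.sin(yaw);
        return rotz > 0;
    }

    public static void main(String[] args) {
        // Camera at the origin with no rotation: a point at z = +5 is in
        // front, a point at z = -5 is behind. Turning the camera 180 degrees
        // (yaw = pi) swaps the two.
        System.out.println(inFront(0, 5, 0, 0, 0));        // true
        System.out.println(inFront(0, -5, 0, 0, 0));       // false
        System.out.println(inFront(0, 5, 0, 0, Math.PI));  // false
    }
}
```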

LWJGL first person camera using jBullet

I've got a camera set up, and I can move with WASD and rotate the view with the mouse. But now comes the problem: I want to add physics to the camera/player, so that it "interacts" with my other jBullet objects. How do I do that? I thought about creating a RigidBody for the camera and storing the position there, so that jBullet can apply its physics to the camera. Then, when I need to change something (the position), I could simply change it in the RigidBody. But I didn't find any methods for editing the position.
Can you push me in the right direction or maybe give me an example source code?
I was asking the same question myself a few days ago. My solution was as Sierox said: create a RigidBody with a BoxShape and add it to the DynamicsWorld. To move the camera around, apply force to its rigid body. I have damping set to 0.999 for linear and 1 for angular to stop the camera when no force is applied, i.e. when the player stops pressing the button.
I also use body.setAngularFactor(0); so the box isn't tumbling all over the place. Also set the mass really low so it doesn't interfere too much with other objects, but can still jump on them, run into them, and otherwise be affected by them.
Remember to convert your x, y, and z coordinates to a Cartesian plane so you move in the direction of the camera, i.e.
protected void setCartesian(){ //set xyz to a standard plane
    yrotrad = (float) (yrot / 180 * Math.PI);
    xrotrad = (float) (xrot / 180 * Math.PI);
    float pd = (float) (Math.PI/180);
    x = (float) (-Math.cos(xrot*pd)*Math.sin(yrot*pd));
    z = (float) (-Math.cos(xrot*pd)*Math.cos(yrot*pd));
    //y = (float) Math.sin(xrot*pd);
}//..
public void forward(){ // move forward from position in direction of camera
    setCartesian();
    x += (Math.sin(yrotrad))*spd;
    z -= (Math.cos(yrotrad))*spd;
    //y -= (Math.sin(xrotrad))*spd;
    body.applyForce(new Vector3f(x,0,z), getThrow());
}//..
public Vector3f getThrow(){ // get position relative to the camera
    float nx=x, ny=y, nz=z;
    float xrotrad, yrotrad;
    yrotrad = (float) (yrot / 180 * Math.PI);
    xrotrad = (float) (xrot / 180 * Math.PI);
    nx += (Math.sin(yrotrad))*2;
    nz -= (Math.cos(yrotrad))*2;
    ny -= (Math.sin(xrotrad))*2;
    return new Vector3f(nx,ny,nz);
}//..
To jump, just use body.setLinearVelocity(new Vector3f(0, jumpHt, 0)); and set jumpHt to whatever velocity you wish.
I use getThrow to return a vector for other objects I may be "throwing" on screen or carrying. I hope I answered your question and didn't throw in too much non-essential information. I'll try to find the source that gave me this idea; I believe it was on the Bullet forums.
------- EDIT ------
Sorry to have left that part out.
Once you have the rigid body functioning properly, you just have to get its coordinates and apply them to your camera, for example:
float mat[] = new float[16];
Transform t = new Transform();
t = body.getWorldTransform(t);
t.origin.get(mat);
x = mat[0];
y = mat[1];
z = mat[2];
gl.glRotatef(xrot, 1, 0, 0); //rotate our camera on the x-axis (up and down)
gl.glRotatef(yrot, 0, 1, 0); //rotate our camera on the y-axis (left and right)
gl.glTranslatef(-x, -y, -z); //translate the scene to the position of our camera
In my case I'm using OpenGL for graphics. xrot and yrot represent the pitch and yaw of your camera. The code above gets the world transform in the form of a matrix; for the purposes of the camera you only need to pull out the x, y, and z coordinates and then apply the transform.
From here, to move the camera, you can set the linear velocity of the rigid body or apply a force.
Before you read this answer I would like to mention that I have a problem with the solution stated in my answer. You can follow my question about that problem so that you can have the solution too if you use this answer.
So. First, you need to create a new BoxShape:
CollisionShape cameraHolder = new BoxShape(SIZE OF CAMERAHOLDER);
And add it to your world so that it interacts with all the other objects. Now change all the methods for camera movement (not rotation) so that they move your cameraHolder rather than your camera. Then set the position of your camera to the position of the cameraHolder.
Again, if you have a problem where you can't move properly, you can check my question and wait for an answer. You also can find a better way of doing this.
If you have problems or did not understand something about the answer, please state it as a comment.
