Getting distance between the camera and an exact object - java

When we need to get the distance between the camera and a 3D object we are pointing at, we use GL11.GL_DEPTH_COMPONENT. The problem is that I want to get the distance between the camera and an object behind the object closest to the camera. Is it possible to do that? How?

This is becoming my most frequently written OpenGL clarification:
OpenGL is not a scene graph. It merely draws points, lines and triangles to a 2D framebuffer. There's no such thing as objects or a camera in OpenGL. There are only geometry primitives, which OpenGL forgets about the moment it has rasterized them, and the result of this process in the framebuffer. OpenGL has no notion of "in front" or "behind".
What you're asking about is completely outside the scope of OpenGL. What you want is a scene graph implementing ray–triangle intersection picking.
And I can only reiterate what Nicol Bolas already answered:
OpenGL is for rendering; it's for drawing stuff. Its depth buffer exists for that purpose. And the depth buffer only stores the distance from the Z-plane of the camera, not the radial distance from the camera to that point on the triangle.

If you want the distance between two points (like the camera and the center point of an object), then you're going to have to compute it with regular vector math: sqrt(dot(pt1 - pt2, pt1 - pt2))
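A minimal sketch of that vector math in plain Java: the Euclidean distance between a camera position and an object's center point. The `Vec3` helper class and the example positions are assumptions for illustration, not part of any OpenGL API.

```java
// Sketch: Euclidean distance between two 3D points, i.e.
// sqrt(dot(pt1 - pt2, pt1 - pt2)). Vec3 and the values are illustrative.
public class CameraDistance {
    static final class Vec3 {
        final double x, y, z;
        Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        Vec3 sub(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
        double dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
    }

    static double distance(Vec3 a, Vec3 b) {
        Vec3 d = a.sub(b);
        return Math.sqrt(d.dot(d));   // sqrt(dot(pt1 - pt2, pt1 - pt2))
    }

    public static void main(String[] args) {
        Vec3 camera = new Vec3(0, 0, 0);
        Vec3 objectCenter = new Vec3(3, 4, 0);
        System.out.println(distance(camera, objectCenter)); // 5.0
    }
}
```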

Related

Computer Graphics: Practical translate to origin method

What I'm trying to do
I'm trying to implement translations on polygons manually, e.g. with plain Java AWT and no OpenGL, in order to understand the concepts better.
I want to perform a "translation to origin" before (and the inverse translation after) I do a scaling / rotation on my object, but then comes the question: by which vertex of the polygon do I calculate the distance to the origin?
Intuition
My intuition is to calculate the distance of each vertex to the origin, and once I find the closest one, calculate the x,y values required to translate it to the origin, and then translate all the polygon's vertices by those x,y.
Am I right?
Another catch
I've implemented a view through a camera in my program, so I have a full viewing pipeline taking polygons in world coordinates, transforming all coordinates to viewing coordinates, projecting them to 2D and then transforming them to viewport coordinates.
My camera has its position, lookAt point and up vector, and I want the scaling / rotations to be done with regard to the lookAt point.
How can I achieve this? Does it just mean translating each polygon to the lookAt point instead of to the origin?
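The translate–scale–translate-back mechanics can be sketched with `java.awt.geom.AffineTransform` (part of plain Java AWT). This only demonstrates the mechanics of transforming about an arbitrary pivot point, such as a lookAt point; the pivot and polygon values below are made-up illustrations.

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

// Sketch: scale a polygon about an arbitrary pivot by composing
// translate-to-origin, scale, translate-back. Values are illustrative.
public class PivotScale {
    static Point2D[] scaleAbout(Point2D[] verts, double px, double py, double s) {
        AffineTransform t = new AffineTransform();
        t.translate(px, py);    // 3. move the pivot back
        t.scale(s, s);          // 2. scale about the origin
        t.translate(-px, -py);  // 1. move the pivot to the origin
        Point2D[] out = new Point2D[verts.length];
        for (int i = 0; i < verts.length; i++) {
            out[i] = t.transform(verts[i], null);
        }
        return out;
    }

    public static void main(String[] args) {
        Point2D[] square = {
            new Point2D.Double(10, 10), new Point2D.Double(20, 10),
            new Point2D.Double(20, 20), new Point2D.Double(10, 20)
        };
        // Scale by 2 about the pivot (10, 10): that vertex stays fixed.
        Point2D[] scaled = scaleAbout(square, 10, 10, 2.0);
        System.out.println(scaled[0].getX() + ", " + scaled[0].getY()); // 10.0, 10.0
        System.out.println(scaled[2].getX() + ", " + scaled[2].getY()); // 30.0, 30.0
    }
}
```

Note how the concatenation order is the reverse of the application order: the last transform concatenated (`translate(-px, -py)`) is the first applied to each vertex.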

Creating a 2D game with modern opengl

I've tried for some time to create a simple 2D engine with modern OpenGL; by "modern" I mean I'm not going to be using this:
glBegin(); glEnd(); glVertex2i(); glOrtho();
etc.
I'm using GLSL.
And I have run into some problems. Here are my questions:
I'm trying to create a 2D coordinate system without using old methods like glOrtho() and glViewport(). I know I'm supposed to use a matrix here and there, but how? How do I create a coordinate system?
How do I use texCoords in GLSL to load a texture in 2D space?
Could someone explain, or point to a source on, how to set the size of a texture? Right now I have to use positions from -1 to 1, but I want to say that the size is 16 × 16, for example.
Is there some way to print out errors from GLSL? When I have an error it just prints out "Couldn't find uniform", because the location is == -1.
How do I create and use a matrix to move, scale and rotate in 2D space?
Can I use a Matrix2f instead of Matrix4f?
Please, if you can, post the code in Java. Thank you.
Disclaimer: I don't have much experience with modern OpenGL, but I have worked with recent versions of DirectX. Most of the general concepts should be the same here.
How do I create a coordinate system?
Nowadays there is no fixed-function pipeline, so one has to do all coordinate transformations by hand inside a vertex shader. It consumes a vertex and produces one transformed to screen space. In the case of 2D graphics, there's no need even to bother with projection matrices.
Consider an input stream consisting of 2D vectors. Rotation, scaling and translation are then implemented with 3×3 homogeneous matrices, but the output vertices must be 4D. Pseudocode:
input = getInputVertex();                        // 2D position
transformed = view * world * vec3(input, 1.0);   // 3x3 homogeneous 2D transform
output = vec4(transformed.xy, 0.0, 1.0);         // z = 0, w = 1 for clip space
Where 'world' is the world matrix of the sprite and 'view' is the matrix of the camera.
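For the "coordinate system" part of the question, the same idea works with a single 4×4 orthographic matrix that maps pixel coordinates to OpenGL's -1..1 clip space; this is what you would upload as a shader uniform instead of calling glOrtho(). The screen size below and the column-major layout are illustrative assumptions.

```java
// Sketch: a column-major 4x4 orthographic matrix mapping pixel
// coordinates (0..width, 0..height, top-left origin) to -1..1 NDC.
// Screen size is an illustrative assumption.
public class Ortho2D {
    static float[] ortho(float width, float height) {
        return new float[] {
            2f / width, 0f,          0f, 0f,
            0f,        -2f / height, 0f, 0f,  // flip Y: pixel Y grows downward
            0f,         0f,         -1f, 0f,
           -1f,         1f,          0f, 1f
        };
    }

    // Multiply the matrix by (x, y, 0, 1) and return the NDC x and y.
    static float[] project(float[] m, float x, float y) {
        return new float[] {
            m[0] * x + m[4] * y + m[12],
            m[1] * x + m[5] * y + m[13]
        };
    }

    public static void main(String[] args) {
        float[] m = ortho(800, 600);
        // The center pixel of an 800x600 screen maps to NDC (0, 0);
        // the top-left pixel (0, 0) maps to NDC (-1, 1).
        float[] ndc = project(m, 400, 300);
        System.out.println(ndc[0] + ", " + ndc[1]); // approximately 0.0, 0.0
    }
}
```

A matrix like this would typically be uploaded once with glUniformMatrix4fv and multiplied in the vertex shader, which also answers the Matrix2f question: clip-space output must be 4D, so a 4×4 (or at least 3×3 homogeneous) matrix is needed.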

OpenGL - Pixel color at specific depth

I have rendered a 3D scene in OpenGL viewed from the gluOrtho perspective. In my application I am looking at the front face of a cube of volume 100x70x60mm (which I have as 1000x700x600 pixels). Inside this cube I have rendered a simple blue sphere which sits exactly in the middle and 'fills' the cube (radius 300 pixels).
I now want to read the color value of pixels (in 3D) at specific points within the cube; i.e. I wish to know if say point (100,100,-200) is blue or blank (black).
glReadPixels only allows 2D extraction of color, and I have tried it with GL_DEPTH_COMPONENT, but I am unsure what this should return in byte form. Is there a way to combine the two? Am I missing something?
I am using Eclipse with Java and JOGL.
This can't be done in the context of OpenGL--you'll need some sort of scene graph or other space partitioning scheme working in concert with your application's data structures.
The reason is simple: the frame buffer only stores the color and depth of the fragment nearest to the eye at each pixel location (assuming a normal GL_LESS depth function). The depth value stored in the Z-buffer is used to determine if each subsequent fragment is closer or farther from the eye than the existing fragment, and thus whether the new fragment should replace the old or not. The frame buffer only stores color and depth values from the most recent winner of the depth test, not the entire set of fragments that would have mapped to that pixel location. Indeed, there would be no way to bound the amount of graphics memory required if that were the case.
You're not the first to fall for this misconception, so I'll say it as bluntly as possible: OpenGL doesn't work that way. OpenGL never(!) deals with objects or any complex scenes. The only things OpenGL knows about are framebuffers, shaders and single triangles. Whenever you draw an object, usually composed of triangles, OpenGL sees only one triangle at a time. And once something has been drawn to the framebuffer, whatever was there before is lost.
There are algorithms built on rasterization concepts (which is what OpenGL is) that decompose a rendered scene into its parts; depth peeling is one of them.

Two Viewports using Java2D

This is sort of a homework question; however, there are no expectations for code, just an idea or hint towards the following problem.
I have a set of cubes in 3D world coordinates and I have to display them using two projections in two separate areas: parallel and perspective. The parallel one went fine, no problems there; however, displaying the same scene using a perspective projection is becoming a nuisance for me.
A world-to-screen transformation seemed like a good idea, but I don't know which coordinates to apply it to: the original world coordinates or the new ones.
Thank you for your time.
PS: we are only allowed Java2D Api.
To perform a perspective projection, you need two additional things: the perspective point (where the "eye" is) and the projection plane. With a parallel projection, the perspective point/eye and plane can be any arbitrary distance from the objects (e.g., the cubes). But it is a little more complex with perspective projection.
Once you establish your eye and projection plane, you will need to iterate over your cubes. Ideally, you would iterate over them from the cube farthest from the eye to the nearest; that way the nearer cubes will overwrite the farther ones.
For each cube, determine the distance from the eye for each point. Then for each face (again in order of decreasing distance), calculate the projected points for each vertex. You can skip those faces with occluded points (the farthest vertex for each cube).
To calculate the projected point for a particular vertex, you need to find the point on the projection plane. This point will be where the line from the eye to the vertex intersects the projection plane. This will require some math, but should not be too difficult.
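The math for that last step is indeed simple similar-triangle geometry. A sketch, assuming (purely for illustration) that the eye sits at the origin looking down -Z and the projection plane is z = planeZ:

```java
// Sketch: project a 3D vertex onto the projection plane z = planeZ,
// with the eye at the origin. The line from the eye to the vertex
// intersects the plane where the similar-triangle scale factor
// planeZ / z is applied. Values in main are illustrative.
public class PerspectivePoint {
    // Returns {x', y'} of the vertex projected onto the plane z = planeZ.
    static double[] project(double x, double y, double z, double planeZ) {
        double t = planeZ / z;   // scale factor along the eye ray
        return new double[] { x * t, y * t };
    }

    public static void main(String[] args) {
        // A vertex at (2, 2, -10) projected onto the plane z = -5
        // lands halfway along the ray from the eye: (1, 1).
        double[] p = project(2, 2, -10, -5);
        System.out.println(p[0] + ", " + p[1]); // 1.0, 1.0
    }
}
```

The resulting 2D plane coordinates are what you would then map to viewport pixels and draw with Java2D.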

Vertices selection and state of model after rotation

I'm currently writing an application that acts as a "cut" tool for 3D meshes. I have run into some problems that I am clueless about how to solve, since this is my first application.
I have loaded a model from an object file onto the canvas, then on the same canvas, I use the mouse drag event to draw lines to define the cutting point.
Let us say I want to cut a ball in half and I draw the line in the middle. How do I detect the vertices of the ball under the line?
Secondly, if I rotate/translate the ball, would all the vertex information change?
Think of what you'd do in the real world: You can't cut a ball with a line, you must use a knife (a line has no volume). To cut the ball, you must move the knife through the ball.
So what you're after is a plane, not a line. To get such a plane, you must use some 3D math. What you have is the canvas orientation and the "side view" of the plane (which looks like a line).
So the plane you're looking for is perpendicular to the canvas. A simple way to get such a plane is to take the canvas orientation and create a plane which has the same orientation and then rotate the plane around the line by 90°.
After that, you can visit all edges of your model and determine on which side of the plane they are. For this, determine on which side of the plane the end points of the edge are: take the dot product of the plane's normal with the vector from a point on the plane to each end point. If both dot products have the same sign, the edge lies entirely on one side and you can ignore it. Otherwise, you need to determine the intersection point of the edge and the plane. Create new edges and connect them accordingly.
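The side test and intersection step can be sketched like this; the plane and edge values in main are made-up illustrations:

```java
// Sketch: classify each edge endpoint by the sign of
// dot(normal, point - planePoint); if the signs differ, the edge
// crosses the plane and we compute the intersection point.
// The plane and edge values in main are illustrative.
public class PlaneCut {
    // Signed distance (scaled by |n|) of p from the plane through p0 with normal n.
    static double side(double[] n, double[] p0, double[] p) {
        return n[0] * (p[0] - p0[0]) + n[1] * (p[1] - p0[1]) + n[2] * (p[2] - p0[2]);
    }

    // Intersection of segment a-b with the plane; assumes the segment crosses it.
    static double[] intersect(double[] n, double[] p0, double[] a, double[] b) {
        double da = side(n, p0, a);
        double db = side(n, p0, b);
        double t = da / (da - db);   // parameter along a->b where the sign flips
        return new double[] {
            a[0] + t * (b[0] - a[0]),
            a[1] + t * (b[1] - a[1]),
            a[2] + t * (b[2] - a[2])
        };
    }

    public static void main(String[] args) {
        double[] n  = {0, 0, 1};     // cutting plane z = 0
        double[] p0 = {0, 0, 0};
        double[] a  = {1, 2, -1};
        double[] b  = {1, 2,  3};
        if (side(n, p0, a) * side(n, p0, b) < 0) {   // opposite signs: edge crosses
            double[] hit = intersect(n, p0, a, b);
            System.out.println(hit[0] + ", " + hit[1] + ", " + hit[2]); // 1.0, 2.0, 0.0
        }
    }
}
```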
See this page for some background on the math. But you should find helper methods for all this in your OpenGL library.
if I rotate / translate the ball, would all the vertex information change
Of course.
It's not going to be that easy.
I assume the line you are drawing induces a plane which then cuts the sphere.
To do so, you have to calculate the intersecting area of the sphere and the plane.
This is not a trivial task and I suggest using an existing framework for this or if you really want to do this yourself, read about basic intersection problems to get a feeling for this kind of problem. This paper offers a good introduction to various intersection tests.
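That said, the simplest sub-problem here is approachable: a plane at distance h from a sphere's center cuts the sphere in a circle whose radius follows directly from the Pythagorean theorem. A sketch with illustrative values:

```java
// Sketch: a plane at distance h from a sphere's center intersects it
// in a circle of radius sqrt(R^2 - h^2). Values are illustrative.
public class SpherePlane {
    // Radius of the intersection circle, or -1 if the plane misses the sphere.
    static double cutRadius(double sphereRadius, double planeDistance) {
        double h = Math.abs(planeDistance);
        if (h > sphereRadius) return -1;   // no intersection
        return Math.sqrt(sphereRadius * sphereRadius - h * h);
    }

    public static void main(String[] args) {
        System.out.println(cutRadius(5, 3)); // 4.0 (a 3-4-5 triangle)
        System.out.println(cutRadius(5, 0)); // 5.0 (plane through the center: a great circle)
        System.out.println(cutRadius(5, 6)); // -1.0 (plane misses the sphere)
    }
}
```

Actually re-triangulating the two halves of the mesh after the cut is the hard part, which is where the existing frameworks mentioned above earn their keep.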
In general, boundary-represented volumes, as in your case, are difficult to handle when it comes to more advanced manipulations. Cutting a sphere in half is easy compared to drilling a small hole into it. Sometimes it's better to use a volume representation, like tetrahedral meshes or CSG.
Regarding your second question: you shouldn't rotate or translate the sphere; rotate and translate the camera instead.
