Why does LWJGL have a function that uses floats as coordinates?

I was fixing a problem in my code that I knew had something to do with this: GL11.glTexCoord2f(x, y); I changed it, of course, but I wondered why it uses floats as coordinates. I'm sure there is a reason, but it just seems extremely stupid to me. There is probably something I'm missing :P but whatever.

This is not specific to Java or LWJGL. As you might know, LWJGL is largely a 1:1 mapping of the original OpenGL API. And the different flavors of the glTexCoord function have been in OpenGL since version 1.0: https://www.opengl.org/sdk/docs/man2/xhtml/glTexCoord.xml
The reason why texture coordinates are usually given as floating point values is that they refer to an "abstract" image (the texture), whose size is not known in advance. For example, you may have a mesh, and as its texture you may use a JPG image that is 200x200 pixels large, or an image that is 800x800 pixels large.
In order to allow arbitrary images to be used as the texture, texture coordinates are almost always given as values between (0.0, 0.0) and (1.0, 1.0). They refer to the total size of the image, regardless of what that size is.
Imagine a vertex with texture coordinates (0.5, 0.5). If the texture is 200x200 pixels large, then the actual "pixel" (texel) used at this vertex is the one at (100,100). If the texture is 800x800 pixels large, then the texel used at this vertex is the one at (400,400).
Using "normalized" coordinates like this (in the range [0.0, 1.0]) is very common in graphics and other applications.
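To make this concrete, here is a minimal sketch using the legacy LWJGL 2 / OpenGL 1.x immediate-mode calls from the question. It maps the whole texture onto a quad; texture binding and projection setup are assumed to happen elsewhere, and the same coordinates work whether the bound texture is 200x200 or 800x800:

import org.lwjgl.opengl.GL11;

public class TexturedQuad {
    /** Draws a quad covering (x, y) .. (x + size, y + size) with the full texture mapped onto it.
     *  Assumes a texture is already bound and a suitable projection is set up. */
    public static void drawFullTexture(float x, float y, float size) {
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex2f(x,        y);         // bottom-left of the image
        GL11.glTexCoord2f(1.0f, 0.0f); GL11.glVertex2f(x + size, y);         // bottom-right
        GL11.glTexCoord2f(1.0f, 1.0f); GL11.glVertex2f(x + size, y + size);  // top-right
        GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex2f(x,        y + size);  // top-left
        GL11.glEnd();
    }
}

Whatever the pixel size of the bound image, (1.0, 1.0) always means "the far corner of the image".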

What else should it be using? The double data type may not be supported on all GPUs, although some support is coming up. And since a texture UV coordinate should be between 0.0 and 1.0 (as a fraction of the width/height), I can't think of any other logical data type that could be used here.
So, is there any specific problem with using floats? If so, please edit your post and provide more information, because for now we cannot help you any further.

Related

Changing 3d model texture based on resolution

As far as I know, you can get the screen density using Gdx.graphics.getDensity(), so you can load the right texture for e.g. 1x, 1.5x, etc.
But what about the texture that comes with the 3D model? For example, the texture is only intended for a maximum of 1280x800 px, while my Android device has a 3x density.
I don't want to scale it up too much, because that can make the image too blurry/faded/not sharp. Does anyone know a solution?
EDIT:
Let me explain in detail.
I have one ModelInstance with a texture atlas (2048x2048 px) attached.
When the game is opened on a 4K screen, I scale the model up almost three times, which causes the texture to become blurry. That makes sense, because the jump from 240 dpi to 640 dpi is very large.
So in my opinion the solution is to make several texture atlases, for 240 dpi, 320 dpi, 480 dpi, etc. The problem is that I don't know how to replace the texture atlas that is integrated with the model from the beginning, so that when scaling up, the atlas is automatically replaced with a higher-resolution one. Thanks.
Usually in 3D graphics the camera or the model is mobile. There isn't a single best resolution for a texture, because the camera may be very near, very far away, or viewing the textured surface at a glancing angle.
The solution offered by graphics APIs is texture filtering settings. Under magnification (where a texel covers more than one screen pixel) you can use linear filtering for soft edges or point filtering for hard edges. Minification is more complex: you can have linear or point filtering, but you can also have mipmaps, which are a precalculated chain of successively half-sized versions of your image, typically all the way down to 1x1. You can set texture filtering to pick the nearest mipmap, blend between mipmaps, or use anisotropic filtering for better sharpness at glancing angles. Generally, linear filtering for magnification and a full mipmap chain with anisotropic filtering for minification produce very good quality at good enough performance to be a sensible default.
So you won't be giving the GPU a single texture for your model; you'll be giving it a chain of textures and letting the GPU worry about how to sample that chain to give the correct amount of blur/sharpness. For performance and compatibility with mipmaps, it is usually a good idea to use power-of-two textures (e.g. 1024x1024 rather than 1280x800).
So just make a 1024x1024 or 2048x2048 texture with mipmaps and appropriate filtering settings, use it on every device regardless of its resolution, and quality-wise you're sorted.
If you're particularly worried about memory use or load times, there's an argument for reducing the texture size on lower-resolution devices (basically, have a second asset with halved resolution, or just skip the highest-resolution mip when loading on low-res devices), but that might be a premature optimization at this stage.
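In libGDX terms, a minimal sketch of that setup could look like the following (the atlas file name is made up; the Texture constructor with useMipMaps set to true and setFilter are the standard libGDX calls for generating the mipmap chain and choosing the filters):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.Texture.TextureFilter;

public class AtlasLoader {
    /** Loads the atlas once with a full mipmap chain and trilinear filtering,
     *  so the GPU picks an appropriate detail level for any screen density or model scale. */
    public static Texture loadAtlas() {
        // second argument 'true' asks libGDX to generate mipmaps for this texture
        Texture atlas = new Texture(Gdx.files.internal("model_atlas_2048.png"), true);
        // minification: blend between mipmap levels; magnification: plain linear
        atlas.setFilter(TextureFilter.MipMapLinearLinear, TextureFilter.Linear);
        return atlas;
    }
}

If the model was loaded with its own material texture, you would swap that texture in via the material's TextureAttribute, but the filtering settings above are what actually control sharpness when the model is scaled.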

Best way to render a non-cubical sandbox game?

In my game, the world is made of cubes, but each cube is divided into 5 parts: a tetrahedron and 4 corners. Each type of block has two colors. This is what a block might look like if one corner were cut, although each corner/face may have colors different from the rest.
The problem is that on the tetrahedral faces I want the edges between the triangles to be seamless, so I can't use textures. (I could, but they would need to be high-res, and if I want to animate the colors (for example on water), that is not an option.)
I've found these approaches:
Drawing each triangle on each tetrahedral face, then each square on each cubical face (using a VBO and all that stuff)
Too many polys! Lag ensues. And this was only rendering the tetrahedra.
Using a fragment shader on world geometry
The math is simple: for each axis, determine whether the point is less than 0.5 within its cube, and XOR the three results. This determines which color to use. I got lag, but I think my code is bad.
3D textures on world geometry
This seems to be the best option given how perfectly it matches my situation, but I really don't know.
Using instanced geometry with any of the above
I'm not sure about this one; I've read that instancing can be slow at large scales. I would need 31 meshes, or more if I want to optimize away hidden surfaces (which is probably unnecessary anyway).
Using a geometry shader
I've read that geometry shaders don't perform well at large scales.
Which of these options would be the most efficient? I think using 3D and 2D textures might be the best option, but if I get lag I want to be sure it's because of bad code, not an inefficient approach.
Edit: Here's my shader code
#version 150 core

in vec4 pass_Position;
in vec4 pass_Color1;
in vec4 pass_Color2;
out vec4 out_Color;

void main(void) {
    // For each axis, test whether the fragment lies in the lower half of its cube,
    // then XOR the three results to choose between the two block colors.
    bool lowX = mod(abs(pass_Position.x), 1.0) <= 0.5;
    bool lowY = mod(abs(pass_Position.y), 1.0) <= 0.5;
    bool lowZ = mod(abs(pass_Position.z), 1.0) <= 0.5;
    out_Color = (lowX ^^ lowY ^^ lowZ) ? pass_Color1 : pass_Color2;
}
The problem is that on the tetrahedral faces I want the edges between the triangles to be seamless, so I can't use textures. (I could, but they would need to be high-res, and if I want to animate the colors (for example on water), that is not an option.)
That's not necessarily the case. Remember that OpenGL doesn't see whole objects, only individual triangles. So rendering that cut face is in no way different from rendering its flat, "fleshless" counterpart.
Any hard edge on the inner tetrahedron doesn't suffer from a texture crease, because the geometric edge is much stronger. So what I'd do is have a separate 2D planar texture space aligned with the tetrahedral surfaces, shared by all faces coplanar to it (as a side note: with this approach you could generate the texture coordinates in a vertex shader from the vertex position).
That being said, simple 2D flat textures will eventually hit limitations. Since you're effectively implementing a variant of an implicit-surface tessellator (with the scalar field creating the surface being binary-valued), it makes sense to think about procedural volumetric texture generation in the fragment shader.

Need Conceptual Help Rendering a Heat Map

I need to create a heatmap for Android Google Maps. I have geolocated points with negative and positive weights attributed to them that I would like to represent visually. Unlike most heatmaps, I want these positive and negative weights to interfere destructively; that is, when two points are close to each other and one is positive and the other negative, their overlap cancels out, effectively not rendering areas that cancel completely.
I plan on using the Android Google Maps TileOverlay/TileProvider classes, whose job is to create/render tiles based on a given location and zoom. (I don't have an issue with this part.)
How should I go about rendering these tiles? I plan on using Java's Graphics class, but the best I can think of is going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel. This seems very inefficient, however, and I was looking for suggestions on a better approach.
Edit: I've considered everything from using a non-Android Google Map inside a WebView, to a TileOverlay, to a GroundOverlay. What I am now considering is having a large two-dimensional array of "squares." Each square would have a longitude, a latitude, and total +/- weights. When a new data point is added, instead of rendering it exactly where it is, it is added to the "square" it falls in: its weight is added to the square, and I then use the Google Maps Polygon object to render the square on the map. The ratio of + points to - points determines the color that is rendered, with a ratio close to 1:1 being clear, >1 being blue (cold point), and <1 being red (hot point).
Edit: a.k.a. clustering the data into small regional groups
I suggest trying
going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel.
Even if it is slow, it will work. There are not too many tiles on the screen, there are not too many pixels in each tile, and all of this is done on a background thread.
All of this is still followed by translating the Bitmap into a byte[]. The byte[] is a representation of a PNG or JPG file, so it's not a simple pixel dump of the Bitmap. That last operation takes some time too, and may well require more processing power than your whole algorithm.
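As a rough sketch of that approach (the per-pixel weight function and the color thresholds are made-up placeholders; TileProvider, Tile, Bitmap and the PNG compression step are the standard Android / Maps SDK pieces mentioned above):

import java.io.ByteArrayOutputStream;
import android.graphics.Bitmap;
import android.graphics.Color;
import com.google.android.gms.maps.model.Tile;
import com.google.android.gms.maps.model.TileProvider;

public class HeatTileProvider implements TileProvider {
    private static final int TILE_SIZE = 256;

    @Override
    public Tile getTile(int x, int y, int zoom) {
        Bitmap bitmap = Bitmap.createBitmap(TILE_SIZE, TILE_SIZE, Bitmap.Config.ARGB_8888);
        for (int px = 0; px < TILE_SIZE; px++) {
            for (int py = 0; py < TILE_SIZE; py++) {
                // weightAt(...) is a placeholder: signed sum of weights of nearby data points
                float w = weightAt(x, y, zoom, px, py);
                bitmap.setPixel(px, py, colorFor(w));
            }
        }
        // Compress the bitmap to PNG bytes, which is the byte[] form Tile expects.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
        return new Tile(TILE_SIZE, TILE_SIZE, out.toByteArray());
    }

    private float weightAt(int tileX, int tileY, int zoom, int px, int py) {
        return 0f; // placeholder: combine positive and negative weights of surrounding points
    }

    private int colorFor(float w) {
        if (Math.abs(w) < 0.05f) return Color.TRANSPARENT;   // cancelled-out areas stay clear
        int alpha = Math.min(255, (int) (Math.abs(w) * 255));
        return w > 0 ? Color.argb(alpha, 255, 0, 0)          // positive: red (hot)
                     : Color.argb(alpha, 0, 0, 255);         // negative: blue (cold)
    }
}

You would register it with something like map.addTileOverlay(new TileOverlayOptions().tileProvider(new HeatTileProvider())), and the Maps SDK then requests tiles off the UI thread for you.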
Edit (moved from comment):
What you describe in the edit sounds like simple clustering on LatLng. I can't say whether it's a better or worse idea, but it's worth a try.

Drawing textured polygons with libgdx

I'm having a problem with my rendering cycle using libGDX. Basically, I need to fill an area with a square texture, and the last part of this area may be smaller than, or shaped differently from, the texture, which means I need to render a quad of arbitrary form and slap the texture on it, cutting off the parts I don't need.
I'm a bit lost on how to do this. So far I've seen that PolygonRegion and PolygonSpriteBatch might do it for me, but I'm wary of instantiating a new heavyweight object that I'll use for only one object.
Is there any alternative? Perhaps the Mesh class, but I'd like to be certain.
I suggest using a Mesh to define exactly the region you want. Defining the vertex points and mapping them to texture coordinates is a bit fiddly, but it's good to know what's going on underneath some of the higher-level APIs (like the *Batch bits). Additionally, the *Batch APIs are designed to share the cost of uploading a single texture across multiple objects, which sounds like it might not apply in this case. (On the other hand, even if the Batch objects are a bit "heavyweight", they may not actually be a problem in practice.)
Another approach to consider is to render the object as a square mesh, but define your texture with transparent pixels for everything outside the region. (I'm assuming the non-square shape is something you can know offline and isn't dynamic.)
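For reference, a minimal sketch of the Mesh route (the vertex positions and texture coordinates are illustrative values for a quad clipped on its right edge; Mesh and VertexAttribute are the standard libGDX classes, and at draw time you would still bind the texture and a ShaderProgram and call mesh.render(shader, GL20.GL_TRIANGLES)):

import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.VertexAttribute;
import com.badlogic.gdx.graphics.VertexAttributes.Usage;

public class ClippedQuad {
    /** Builds a quad whose right edge is clipped at 60% of the texture width;
     *  each vertex is (x, y, u, v), so the texture is cut rather than squashed. */
    public static Mesh build() {
        float[] vertices = {
            0.0f, 0.0f,   0.0f, 1.0f,   // bottom-left
            0.6f, 0.0f,   0.6f, 1.0f,   // bottom-right, clipped
            0.6f, 1.0f,   0.6f, 0.0f,   // top-right, clipped
            0.0f, 1.0f,   0.0f, 0.0f    // top-left
        };
        short[] indices = {0, 1, 2, 2, 3, 0};

        Mesh mesh = new Mesh(true, 4, 6,
                new VertexAttribute(Usage.Position, 2, "a_position"),
                new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"));
        mesh.setVertices(vertices);
        mesh.setIndices(indices);
        return mesh;
    }
}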
It isn't a big problem if you instantiate a PolygonSpriteBatch for that purpose. The object mainly contains buffers for geometry data. Of course, you will need to take care of the correct rendering order, calling flush or end when needed.
Mesh is another option, but it can be a bit more work because you need to provide the vertices and texture coordinates manually.
From a performance point of view, rendering a single sprite is slightly faster with a Mesh. I'm not sure whether the difference affects FPS at all in your case.
EDIT: I forgot to mention: if you use a SpriteBatch to render one object, don't use the default constructor; it reserves a lot of memory.
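A minimal sketch of the PolygonRegion route (the triangle vertices are illustrative; PolygonRegion, PolygonSpriteBatch and its draw(region, x, y) overload are the libGDX classes discussed above, and the small capacity passed to the constructor addresses the memory point from the edit):

import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.PolygonRegion;
import com.badlogic.gdx.graphics.g2d.PolygonSpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

public class PolygonExample {
    private final PolygonSpriteBatch polyBatch = new PolygonSpriteBatch(64); // small buffer, one object
    private final PolygonRegion region;

    public PolygonExample(Texture texture) {
        // A right triangle cut from the texture: vertex positions in pixels,
        // triangulated with a single triangle (indices into the vertex array).
        float[] vertices = {0, 0, 100, 0, 0, 100};
        short[] triangles = {0, 1, 2};
        region = new PolygonRegion(new TextureRegion(texture), vertices, triangles);
    }

    public void render() {
        polyBatch.begin();
        polyBatch.draw(region, 50, 50); // draw the clipped shape at (50, 50)
        polyBatch.end();
    }
}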

OpenGL - Pixel color at specific depth

I have rendered a 3D scene in OpenGL using an orthographic (gluOrtho) projection. In my application I am looking at the front face of a cube with a volume of 100x70x60 mm (which I represent as 1000x700x600 pixels). Inside this cube I have rendered a simple blue sphere which sits exactly in the middle and 'fills' the cube (radius 300 pixels).
I now want to read the color value of pixels (in 3D) at specific points within the cube; i.e., I wish to know whether, say, point (100, 100, -200) is blue or blank (black).
glReadPixels only allows 2D extraction of color, and I have tried it with DEPTH_COMPONENT, but I am unsure what this should return in byte form. Is there a way to combine the two? Am I missing something?
I am using Eclipse with Java and JOGL.
This can't be done in the context of OpenGL--you'll need some sort of scene graph or other space partitioning scheme working in concert with your application's data structures.
The reason is simple: the frame buffer only stores the color and depth of the fragment nearest to the eye at each pixel location (assuming a normal GL_LESS depth function). The depth value stored in the Z-buffer is used to determine if each subsequent fragment is closer or farther from the eye than the existing fragment, and thus whether the new fragment should replace the old or not. The frame buffer only stores color and depth values from the most recent winner of the depth test, not the entire set of fragments that would have mapped to that pixel location. Indeed, there would be no way to bound the amount of graphics memory required if that were the case.
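For what it's worth, here is a minimal JOGL sketch of what glReadPixels does give you: the color and the normalized depth of the frontmost fragment at a window position, and nothing behind it (imports assume a recent JOGL 2.x; the GL2 instance and window coordinates come from your own code):

import java.nio.ByteBuffer;
import java.nio.FloatBuffer;
import com.jogamp.common.nio.Buffers;
import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL2;

public class PixelReader {
    /** Reads the RGBA color and depth (0.0 = near plane, 1.0 = far plane)
     *  of the nearest fragment at window position (winX, winY). */
    public static float[] readColorAndDepth(GL2 gl, int winX, int winY) {
        ByteBuffer color = Buffers.newDirectByteBuffer(4);
        FloatBuffer depth = Buffers.newDirectFloatBuffer(1);

        gl.glReadPixels(winX, winY, 1, 1, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, color);
        gl.glReadPixels(winX, winY, 1, 1, GL2.GL_DEPTH_COMPONENT, GL.GL_FLOAT, depth);

        return new float[] {
            (color.get(0) & 0xFF) / 255f,  // red
            (color.get(1) & 0xFF) / 255f,  // green
            (color.get(2) & 0xFF) / 255f,  // blue
            depth.get(0)                   // window-space depth of the visible surface only
        };
    }
}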
You're not the first to fall for this misconception, so I'll say it as bluntly as possible: OpenGL doesn't work that way. OpenGL never(!) deals with objects or complex scenes. The only things OpenGL knows about are framebuffers, shaders, and single triangles. Whenever you draw an object, usually composed of triangles, OpenGL sees only one triangle at a time. And once something has been drawn to the framebuffer, whatever was there before is lost.
There are algorithms, based on the concepts of rasterizers (which is what OpenGL is), that decompose a rendered scene into its parts; depth peeling is one of them.
