Does LibGDX draw outside of the camera? - java

If I draw a texture using SpriteBatch that is not visible to the camera or viewport, does it still render and use my GPU? A little something like:
batch.draw(img, 9999999f, 9999999f, 1f, 1f);
Or do I have to check whether it's out of frame and skip drawing it in the first place?

It does draw it: the vertex shader will run for the four vertices of the texture or texture region. Since the vertex shader projects them to somewhere outside the visible space, the fragment shader will never be run for them.
For each sprite drawn, the vertex shader program is run four times regardless of whether the sprite is visible and the fragment shader program is run roughly once for every pixel of the sprite that appears on screen. A modern low end phone can easily handle hundreds of “wasted” sprites being drawn off-screen.
It’s up to you to decide whether it is worth calculating if it will be visible and skip drawing it. If you do this for each individual sprite, it will be costly on the CPU for a comparatively small savings of GPU. If you think about it, checking an individual sprite based on whether any of its four corners is visible is running your own copy of the vertex shader program on the CPU redundantly just in case it saves you from having to repeat that program on the GPU. And the GPU is far more optimized for this kind of thing. (If you're using orthographic projection, the CPU's version can at least be simpler than the GPU version because it becomes a simple 2D comparison.)
For that reason, if you're going to check them first, check groups of them at a time. So you might organize your game world into sections and check the outer bounds of a whole section before deciding to draw all or none of that section's sprites. For it to be worthwhile, each section should be big enough to encompass at least a couple hundred sprites.
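Here's a minimal sketch of that section-level check, assuming an OrthographicCamera and a hypothetical Section class holding a bounds Rectangle plus its sprites (Section and SectionRenderer are illustrative names, not libGDX API):

    import com.badlogic.gdx.graphics.OrthographicCamera;
    import com.badlogic.gdx.graphics.g2d.Sprite;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;
    import com.badlogic.gdx.math.Rectangle;
    import java.util.List;

    public class SectionRenderer {
        // Hypothetical world section: a bounds rectangle plus the sprites inside it.
        public static class Section {
            public final Rectangle bounds;
            public final List<Sprite> sprites;
            public Section(Rectangle bounds, List<Sprite> sprites) {
                this.bounds = bounds;
                this.sprites = sprites;
            }
        }

        private final Rectangle viewBounds = new Rectangle();

        /** Draws only the sections whose bounds overlap the camera's visible rectangle. */
        public void render(SpriteBatch batch, OrthographicCamera camera, List<Section> sections) {
            float w = camera.viewportWidth * camera.zoom;
            float h = camera.viewportHeight * camera.zoom;
            viewBounds.set(camera.position.x - w / 2f, camera.position.y - h / 2f, w, h);

            batch.setProjectionMatrix(camera.combined);
            batch.begin();
            for (Section section : sections) {
                if (viewBounds.overlaps(section.bounds)) { // one cheap CPU check per section
                    for (Sprite sprite : section.sprites) {
                        sprite.draw(batch);                // hundreds of sprites per check
                    }
                }
            }
            batch.end();
        }
    }

With sections of a few hundred sprites each, a single Rectangle.overlaps call can skip hundreds of off-screen draws, which is exactly the trade described above.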

Related

Real time screen recording in a LibGDX screen

In short, I'm making a simulation where I have a bunch of creatures that can see each other. The way I want to do this is to capture an area around each creature and give it to their neural network, and make them evolve to recognize their surroundings. I am coding this using LibGDX, and I don't plan on taking screenshots every single frame, because I can imagine that is already a very poor idea. However, the problem is that I don't know how to get the pixels inside a defined square without capturing the entire screen and then cherry-picking what I want for each creature, which would cause a MASSIVE lag spike, since the area these creatures will be in is 2000x2000 pixels, and therefore 12 million different values (4 million pixels times three RGB channels).
Each creature is about 5 pixels wide and tall, so my idea is to give them a 16x16 area around them, which is why iterating through the entire frame buffer won't work: it would pointlessly iterate through millions of values before finding the ones I asked for.
I would also need to be able to take pictures outside of the screen (as in, the part outside the window's boundaries), if that is even possible.
How can I achieve this? I'm aiming for performance, but I do not mind distributing the load between multiple frames or even multithreading.
The problem is you can't query pixels in a framebuffer.
You can capture a texture from a framebuffer, and you can convert a texture to a pixmap.
(See: libgdx TextureRegion to Pixmap.)
You can then call getPixel(int x, int y) against the pixmap.
However, maybe going the other way would be better.
Start with a pixmap, work with the pixmap, and each frame convert the pixmap to a texture and render that texture fullscreen. This also removes the need for the creatures' environment to match the screen resolution (although you could still set it up like that).
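A rough sketch of that inverted flow, assuming the 2000x2000 world from the question and a hypothetical sense() helper for the 16x16 window (edge clamping omitted for brevity); Pixmap.getPixel and Texture.draw are standard libGDX calls:

    import com.badlogic.gdx.graphics.Pixmap;
    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;

    public class PixmapWorld {
        public static final int SIZE = 2000; // world dimensions from the question
        public static final int VIEW = 16;   // per-creature sensor window

        private final Pixmap world = new Pixmap(SIZE, SIZE, Pixmap.Format.RGBA8888);
        private final Texture worldTexture = new Texture(world);

        /** Samples the 16x16 window centered on a creature, straight from the pixmap. */
        public int[] sense(int cx, int cy) {
            int[] window = new int[VIEW * VIEW];
            for (int dy = 0; dy < VIEW; dy++) {
                for (int dx = 0; dx < VIEW; dx++) {
                    // getPixel returns a packed RGBA8888 int; no framebuffer read-back
                    window[dy * VIEW + dx] =
                            world.getPixel(cx - VIEW / 2 + dx, cy - VIEW / 2 + dy);
                }
            }
            return window;
        }

        /** Re-uploads the pixmap once per frame and draws it fullscreen. */
        public void render(SpriteBatch batch) {
            worldTexture.draw(world, 0, 0); // push the CPU-side pixels to the GPU
            batch.begin();
            batch.draw(worldTexture, 0, 0);
            batch.end();
        }
    }

The key design point: the pixmap on the CPU is the authoritative copy of the world, so each creature's 16x16 read is just 256 array lookups, and the GPU only ever receives data, never sends it back.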

Drawing many textured quads OpenGL ES 2.0

I'm programming an Android app where a grid is drawn, which you can move around and move in its direction. The grid consists of about 2000 to 5000 quads, each with a different texture. I defined 4 vertices and use an index buffer to draw each quad. Before drawing I position it using a model matrix. As you can move in my scene, I use view frustum culling, which increases performance in some situations. Unfortunately there may be cases where I need to draw all of the quads, so I want to ask how to prevent slow drawing.
I can't use a texture atlas, as all of the textures are pretty big (from 256x256 to 1024x1024). I think calling glDrawElements() for each quad is what makes me slow, but I don't know how I can change that.
Another idea I had would be to draw the scene to a texture and just bind this texture to a single quad to create the illusion of the scene being drawn. As the user gets closer, I could redraw it at better resolution. Could this work?
I look forward to any kind of help.
"I can't use a texture atlas as all of the textures are pretty big (from 256x256 to 1024x1024)."
You can fit 64 256x256 textures into a single 2048x2048 atlas; that's a huge amount, so you should definitely atlas. Even getting four 1024x1024 textures onto a 2048x2048 is worth doing, as it can quarter your draw call count.
And as WLGfx says in the comments to your question, you should batch up any quads that use the same texture (with atlasing there will be a lot more of these).
I think this would be enough, but you still might have a pretty high draw call count in your fully zoomed-out view. After implementing atlasing and batching, if performance there is still a problem, you could create a separate asset set of thumbnail textures at, say, quarter resolution (so a 256x256 becomes a 64x64). This thumbnail set would fit onto just a handful of 2048x2048 atlas sheets, and you could switch to it when zoomed out far enough.
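A sketch of the batching idea against Android's GLES20 bindings, assuming a hypothetical Quad type that can append its vertex data to a shared buffer (shader and glVertexAttribPointer setup are omitted); the point is one glDrawElements per texture rather than one per quad:

    import android.opengl.GLES20;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import java.nio.ShortBuffer;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class QuadBatcher {
        // Hypothetical quad: a texture handle plus code to emit its 4 corner vertices.
        public interface Quad {
            int textureId();
            void appendVertices(FloatBuffer target); // position + UV per corner
        }

        private static final int MAX_QUADS = 1024;
        private final FloatBuffer vertices = ByteBuffer
                .allocateDirect(MAX_QUADS * 4 * 4 * 4) // 4 verts x 4 floats x 4 bytes
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        private final ShortBuffer indices = buildIndices();

        public void draw(List<Quad> quads) {
            // Group quads by texture so each texture costs exactly one draw call.
            Map<Integer, List<Quad>> byTexture = new HashMap<>();
            for (Quad q : quads) {
                byTexture.computeIfAbsent(q.textureId(), k -> new ArrayList<>()).add(q);
            }
            for (Map.Entry<Integer, List<Quad>> group : byTexture.entrySet()) {
                GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, group.getKey());
                vertices.clear();
                for (Quad q : group.getValue()) q.appendVertices(vertices);
                vertices.flip();
                // glVertexAttribPointer calls for position/UV would go here.
                GLES20.glDrawElements(GLES20.GL_TRIANGLES,
                        group.getValue().size() * 6, GLES20.GL_UNSIGNED_SHORT, indices);
            }
        }

        private static ShortBuffer buildIndices() {
            // Two triangles per quad: 0,1,2 and 2,3,0, offset by 4 vertices per quad.
            ShortBuffer buf = ByteBuffer.allocateDirect(MAX_QUADS * 6 * 2)
                    .order(ByteOrder.nativeOrder()).asShortBuffer();
            for (int i = 0; i < MAX_QUADS; i++) {
                short base = (short) (i * 4);
                buf.put(base).put((short) (base + 1)).put((short) (base + 2));
                buf.put((short) (base + 2)).put((short) (base + 3)).put(base);
            }
            buf.flip();
            return buf;
        }
    }

With atlasing in place the "group by texture" step collapses most of the map to a handful of entries, so 2000-5000 quads can come down to a few draw calls.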
"Another idea I had would be to draw the scene to a texture and just bind this texture to a single quad to create an illusion of the scene being drawn. As the user gets closer I could redraw it for better resolution. Could this work?"
This could work as long as your scene is very static; if the quads are moving or changing every frame, it might not help. Also, there might be a noticeable framerate hitch whenever you have to do the full redraw.

Texture partially off screen - performance difference

Pictured is an example of two situations where a textured polygon is being rendered, all done with OpenGL ES 2.0:
A) the polygon is partially off the viewport
B) the polygon is completely inside it
My question:
Is situation 'A)' going to consume fewer system/GPU resources* because the texture is partially off screen, or will it perform just the same as if I rendered it fully inside the viewport, and why?
*"Resources" meaning speed, not memory.
I know that OpenGL calculates the vertices first, before rendering the texture, and that if the vertices are off screen/outside the viewport it skips any further calculations. But is that also the case for a textured object that is only partially off screen? Will it omit the part of the texture that is not visible?
Situation A should be faster. Vertex processing will be the same. After that, clipping to the view volume is applied. In situation A, part of the polygon will be clipped, while the whole polygon goes through in situation B. After that, the polygon is rasterized, and the resulting fragments go into fragment processing. Since there are fewer fragments in situation A, there's less work to do in this stage. The fragment shader does the texture sampling, so only the visible parts of the texture will be sampled in this case.
Clipping can also happen after the fragments are generated, but I would always expect it to be done before the fragment shader.
While there will always be less work in situation A, this doesn't necessarily mean that you'll see a speed difference in the rendering. You could have bottlenecks in other parts of the pipeline, or in your app code.

OpenGL - Pixel color at specific depth

I have rendered a 3D scene in OpenGL, viewed through a gluOrtho (orthographic) projection. In my application I am looking at the front face of a cube with a volume of 100x70x60 mm (which I render as 1000x700x600 pixels). Inside this cube I have rendered a simple blue sphere which sits exactly in the middle and 'fills' the cube (radius 300 pixels).
I now want to read the color value of pixels (in 3D) at specific points within the cube; i.e. I wish to know if, say, point (100,100,-200) is blue or blank (black).
glReadPixels only allows 2D extraction of color, and I have tried it with GL_DEPTH_COMPONENT, but I'm unsure what this should return in byte form. Is there a way to combine the two? Am I missing something?
I am using Eclipse with Java and JOGL.
This can't be done in the context of OpenGL; you'll need some sort of scene graph or other space-partitioning scheme working in concert with your application's data structures.
The reason is simple: the frame buffer only stores the color and depth of the fragment nearest to the eye at each pixel location (assuming a normal GL_LESS depth function). The depth value stored in the Z-buffer is used to determine if each subsequent fragment is closer or farther from the eye than the existing fragment, and thus whether the new fragment should replace the old or not. The frame buffer only stores color and depth values from the most recent winner of the depth test, not the entire set of fragments that would have mapped to that pixel location. Indeed, there would be no way to bound the amount of graphics memory required if that were the case.
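To make that concrete, here is a JOGL-style sketch (matching the asker's setup) that reads back the lone surviving color and depth values at one window coordinate; the method name readFrontmostFragment is just illustrative:

    import java.nio.ByteBuffer;
    import java.nio.FloatBuffer;
    import com.jogamp.common.nio.Buffers;
    import com.jogamp.opengl.GL;
    import com.jogamp.opengl.GL2;

    // Reads the single surviving fragment at window coordinates (x, y).
    // gl is the current context, e.g. from GLAutoDrawable.getGL().getGL2().
    void readFrontmostFragment(GL2 gl, int x, int y) {
        ByteBuffer color = Buffers.newDirectByteBuffer(4);
        gl.glReadPixels(x, y, 1, 1, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, color);

        FloatBuffer depth = Buffers.newDirectFloatBuffer(1);
        gl.glReadPixels(x, y, 1, 1, GL2.GL_DEPTH_COMPONENT, GL.GL_FLOAT, depth);

        // depth.get(0) is the nearest fragment's depth in [0, 1] window space.
        // It is not a key into the fragments behind it; those were discarded.
    }

The depth value identifies only where the front-most fragment sits between the near and far planes; everything that lost the depth test at that pixel was thrown away long before read-back, which is why the interior query the question asks for has to come from your own data structures instead.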
You're not the first to fall for this misconception, so I'll say it the most blunt way possible: OpenGL doesn't work that way. OpenGL never(!) deals with objects or any complex scenes. The only things OpenGL knows about are framebuffers, shaders and single triangles. Whenever you draw an object, usually composed of triangles, OpenGL will only ever see one triangle at a time. And once something has been drawn to the framebuffer, whatever was there before is lost.
There are algorithms based on the concepts of rasterizers (which is what OpenGL is) that decompose a rendered scene into its parts; depth peeling would be one of them.

OpenGL: Create a sky box?

I'm new to OpenGL. I'm using JOGL.
I would like to create a sky for my world that I can texture with clouds or stars. I'm not sure what the best way to do this is. My first instinct is to make a really big sphere with quadric orientation GLU_INSIDE, and texture that. Is there a better way?
A skybox is a pretty good way to go. You'll want to use a cube map for this. Basically, you render a cube around the camera and map a texture onto the inside of each face of the cube. I believe OpenGL may include this in its fixed function pipeline, but in case you're taking the shader approach (fixed function is deprecated anyway), you'll want to use cube map samplers (samplerCUBE in Cg, not sure about GLSL). When drawing the cube map, you also want to remove translation from the modelview matrix but keep the rotation (this causes the skybox to "follow" the camera but allows you to look around at different parts of the sky).
The best thing to do is actually draw the cube map after drawing all opaque objects. This may seem strange because by default the sky will block other objects, but you use the following trick (if using shaders) to avoid this: when writing the final output position in the vertex shader, instead of writing out .xyzw, write .xyww. This will force the sky to the far plane which causes it to be behind everything. The advantage to this is that there is absolutely 0 overdraw!
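A sketch of that trick in GLSL, embedded here as Java string constants to stay in the document's language; u_projection and u_viewRot (the view matrix with its translation zeroed) are assumed uniform names, not from any particular engine:

    // Skybox vertex shader: gl_Position = pos.xyww makes depth w/w = 1.0 after
    // the perspective divide, pinning the sky to the far plane behind everything.
    static final String SKYBOX_VERT =
          "attribute vec3 a_position;\n"
        + "uniform mat4 u_projection;\n"
        + "uniform mat4 u_viewRot;    // view matrix with translation removed\n"
        + "varying vec3 v_dir;        // direction used to sample the cube map\n"
        + "void main() {\n"
        + "    v_dir = a_position;\n"
        + "    vec4 pos = u_projection * u_viewRot * vec4(a_position, 1.0);\n"
        + "    gl_Position = pos.xyww; // force depth to the far plane\n"
        + "}\n";

    // Matching fragment shader: the cube map sampler is samplerCube in GLSL
    // (the Cg samplerCUBE equivalent the answer was unsure about).
    static final String SKYBOX_FRAG =
          "uniform samplerCube u_skybox;\n"
        + "varying vec3 v_dir;\n"
        + "void main() {\n"
        + "    gl_FragColor = textureCube(u_skybox, v_dir);\n"
        + "}\n";

One caveat: with the default GL_LESS depth function, a depth of exactly 1.0 fails against the cleared depth buffer, so this trick is normally paired with glDepthFunc(GL_LEQUAL) while the sky is drawn.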
Yes.
Making a really big sphere has two major problems. First, you may encounter problems with clipping: the sky may disappear if it is outside of your far clipping distance, and objects that enter your sky box from a distance will visually pass through a very solid wall. Second, you are wasting a lot of polygons (and a lot of pain) for a very simple effect.
Most people actually use a small cube(Hence the name "Sky box"). You need to render the cube in the pre-pass with depth testing turned off. Thus, all objects will render on top of the cube regardless of their actual distance to you. Just make sure that the length of a side is greater than twice your near clipping distance, and you should be fine.
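A minimal sketch of that pre-pass ordering in JOGL-style calls; drawSkyboxCube is a hypothetical helper that issues the cube geometry, and gl is assumed to be the current GL2 context:

    // Sky-box pre-pass: draw the small cube first with depth testing (and depth
    // writes) off, so everything rendered afterwards lands on top of it.
    gl.glDisable(GL.GL_DEPTH_TEST);
    gl.glDepthMask(false);          // keep the sky out of the Z-buffer entirely
    drawSkyboxCube(gl);             // hypothetical helper: textured unit cube
    gl.glDepthMask(true);
    gl.glEnable(GL.GL_DEPTH_TEST);
    // ...then draw the rest of the scene as usual.

Disabling depth writes as well as the test means the sky can never occlude anything, no matter how small the cube is.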
Spheres are nice to handle, as they easily avoid the distortions, corners, etc. that may be visible with other shapes in some situations. Another possibility is a cylinder.
For a really high-quality sky you can run a sky-lighting simulation, setting the sphere's colors depending on the time (=> sun position!) and direction, and add some clouds as 3D objects between the sky sphere and the view position.
