(Sorry in advance, I'm not at my computer so I can't provide code)
I have a voxel-based game, and I wanted to add transparent textures, namely glass. I realized that transparent textures would throw things off (looking through other blocks), so I used my shaders to discard any fragments with an alpha lower than the alpha value of the glass texture. Currently, the glass has an alpha of 0.26 (values range from 0 to 1), and it works fine. But if I want to give the glass some actual transparency instead of removing all of it, I run into my old issue where you can see through the blocks behind the glass.
I read in a few places that I would need to sort my geometry from front to back. Does this still pertain to my situation even after using shaders? I use display lists for each chunk in my world; should I keep two lists per chunk, one for opaque blocks and one for transparent ones? Then when I render, I would do one pass rendering the opaque lists, and then a second pass walking the chunks in reverse order and rendering the transparent lists?
Or is there a better way to do this?
Looking through blocks behind transparent geometry means you still have depth checking enabled, meaning the blocks behind the glass didn't get rendered (their fragments were rejected by the depth test against the glass).
I read in a few places that I would need to sort my geometry from
front to back. Does this still pertain to my situation even after
using shaders?
Yes, you still need to sort; shaders don't solve that issue. However, you only need to sort the transparent geometry, and you render it after the opaque geometry (after turning on alpha blending).
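Something like the following is the usual structure; this is just a minimal sketch assuming LWJGL-style GL11 calls, and Chunk, opaqueList and transparentList are made-up names standing in for your own chunk/display-list bookkeeping:

import static org.lwjgl.opengl.GL11.*;
import java.util.List;

class Chunk {
    int opaqueList;       // display list holding this chunk's opaque blocks
    int transparentList;  // display list holding this chunk's glass/transparent blocks
}

class WorldRenderer {
    // chunks are assumed to be sorted near-to-far from the camera
    void render(List<Chunk> chunks) {
        // Pass 1: opaque geometry, no blending, normal depth writes.
        glDisable(GL_BLEND);
        glDepthMask(true);
        for (Chunk c : chunks) {
            glCallList(c.opaqueList);
        }

        // Pass 2: transparent geometry, drawn back-to-front with blending on.
        // Depth testing stays enabled so glass hidden behind opaque blocks is rejected,
        // but depth writes are turned off so glass doesn't occlude other glass.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(false);
        for (int i = chunks.size() - 1; i >= 0; i--) {
            glCallList(chunks.get(i).transparentList);
        }
        glDepthMask(true);
    }
}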
I'm having quite a bit of difficulty wrapping my head around the actual display side of things with libgdx. That is, it just seems fairly jumbled in terms of what needs to be done in order to actually put something up onto the screen. I guess my confusion can sort of be separated into two parts:
What exactly needs to be done in terms of creating an image? There's Texture, TextureRegion, TextureAtlas, Sprite, Batch, and probably a few other art-related assets that I'm missing. How do these all relate and tie into each other? What's the "production chain" among these, I guess, would be a way of putting it.
In terms of putting whatever is created from the stuff above onto the monitor or display, how do the different coordinate and sizing measures relate and translate to and from each other? Say there's some image X that I want to put on the screen. It's got its own set of dimensions and coordinates, but then there's also a viewport size (is there a viewport position?) and a camera position (is there a camera size?). On top of all that, there's also the overall display size that comes from Gdx.graphics. A few examples of things I might want to do could be as follows:
X is my "global map" that is bigger than my screen
size. I want to be able to scroll/pan across it. What are the
coordinates/positions I should use when displaying it?
Y is bigger
than my screen size. I want to scale it down and have it always be
in the center of the screen/display. What scaling factor do I use
here, and which coordinates/positions?
Z is smaller than my screen
size. I want to stick it in the upper left corner of my screen and
have it "stick" to the global map I mentioned earlier. Which
positioning system do I use?
Sorry if that was a bunch of stuff... I guess the tl;dr of that second part is just which set of positions/coordinates, sizes, and scales am I supposed to do everything in terms of?
I know this might be a lot to ask at once, and I also know that most of this stuff can be found online, but after sifting through tutorial after tutorial, I can't seem to get a straight answer as to how these things all relate to each other. Any help would be appreciated.
Texture is essentially the raw image data.
TextureRegion allows you to grab smaller areas from a larger texture. For example, it is common practice to pack all of the images for your game/app into a single large texture (the LibGDX “TexturePacker” is a separate program that does this) and then use regions of the larger texture for your individual graphics. This is done because switching textures is a heavy and slow operation and you want to minimize this process.
When you pack your images into a single large image with the TexturePacker it creates a “.atlas” file which stores the names and locations of your individual images. TextureAtlas allows you to load the .atlas file and then extract your original images to use in your program.
Sprite adds position and color capabilities to the texture. Notice that the Texture API has no methods for setting/getting position or color. Sprites will be your characters and other objects that you can actually move around and position on the screen.
Batch/SpriteBatch is an efficient way of drawing multiple sprites to the screen. Instead of making drawing calls for each sprite one at a time the Batch does multiple drawing calls at once.
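To make that chain concrete, here is a minimal sketch of the Texture > TextureRegion > Sprite > SpriteBatch path (the file name "sheet.png" and the region coordinates are made up):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

public class PipelineDemo extends ApplicationAdapter {
    private Texture sheet;
    private Sprite player;
    private SpriteBatch batch;

    @Override
    public void create() {
        sheet = new Texture(Gdx.files.internal("sheet.png"));          // raw image data
        TextureRegion region = new TextureRegion(sheet, 0, 0, 32, 32); // a 32x32 piece of it
        player = new Sprite(region);                                   // adds position/color/rotation
        player.setPosition(100, 50);
        batch = new SpriteBatch();
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();        // one begin/end pair, draw everything in between
        player.draw(batch);
        batch.end();
    }

    @Override
    public void dispose() {
        batch.dispose();
        sheet.dispose();
    }
}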
And hopefully I’m not adding to the confusion, but another option I really like is using the “Actor” and “Stage” classes over the “Sprite” and “SpriteBatch” classes. Actor is similar to Sprite but adds additional functionality for moving/animating via the act method. The Stage replaces the SpriteBatch, as it uses its own internal SpriteBatch, so you do not need to use a SpriteBatch explicitly.
There is also an entire set of UI components (table, button, textfield, slider, progress bar, etc) which are all based off of Actor and work with the Stage.
I can’t really help with question 2. I stick to UI-based apps, so I don’t know the best practices for working with large game worlds. But hopefully someone more knowledgeable in that area can help you with that.
This was too long to reply as a comment, so I’m responding as another answer...
I think both Sprite/SpriteBatch and Actor/Stage are equally powerful, as you can still animate and move with Sprite/SpriteBatch, but Actor/Stage is easier to work with. The Stage has two methods called “act” and “draw” which allow it to update and draw every actor it contains very easily. You override the act method for each of your actors to specify what kind of action you want it to perform. Look up a few different tutorials for Stage/Actor with sample code and it should become clear how to use them.
Also, I was slightly incorrect before that “Actor” is equivalent to Sprite, because Sprite includes a texture, but Actor by itself does not have any kind of graphical component. There is an extension of Actor called “Image” that includes a Drawable, so the Image class is actually the equivalent to Sprite. Actor is the base class that provides the methods for acting (or “updating”), but it doesn’t have to be graphical. I've used Actors for other purposes such as triggering audio sounds at specific times.
Atlas creates the large Texture containing all of your png files and then allows you to get regions from it for individual png's. So the pipeline for getting a specific png graphic would be Atlas > Region > Sprite/Image. Both Image and Sprite classes have constructors that take a region.
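And the Stage/Actor version of the same pipeline, as a minimal sketch (the atlas file name "game.atlas" and the region name "player" are made up):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.scenes.scene2d.ui.Image;

public class StageDemo extends ApplicationAdapter {
    private TextureAtlas atlas;
    private Stage stage;

    @Override
    public void create() {
        atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));  // produced by TexturePacker
        Image player = new Image(atlas.findRegion("player"));        // Atlas > Region > Image
        player.setPosition(100, 50);
        stage = new Stage();        // owns its own SpriteBatch internally
        stage.addActor(player);
    }

    @Override
    public void render() {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        stage.act(Gdx.graphics.getDeltaTime()); // updates every actor it contains
        stage.draw();                           // draws every actor it contains
    }

    @Override
    public void dispose() {
        stage.dispose();
        atlas.dispose();
    }
}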
I'm using OpenGL with LWJGL in Java, but that's not important here. I'm not asking for code, but for a hint on how to do this. Language-independent.
I have some region (a rectangle for simplicity), and, let's say, a big tiled map which I want to show in this area. The area is not the whole screen, I want to render something around it.
I know about a few approaches, but all are either huge pain or unsuitable.
Render the whole tiled map, then render everything else, including the background and the frame, on top of it, leaving a window for the map to show through. Yes, it works, but it'd be a pain.
Render only visible tiles and only the visible portions of the border tiles.
Again, doable but hard, and e.g. when I use an external font drawing library, I can't just tell it "Hey, stop at this line, there's my border." Not a very good approach, I'd say.
Some OpenGL magic which I'm not aware of.
Guide me.
When your area is guaranteed to be an axis-aligned rectangle, you can just use glViewport and/or glScissor (the latter together with glEnable(GL_SCISSOR_TEST)) to prevent OpenGL from rendering outside that rectangle.
If you modify the viewport, the resulting image is simply scaled to fit the viewport rectangle. Using the scissor test, the area is just "cut out", so nothing is scaled with respect to the viewport setting. But the differences do not really matter - you can get to the same result via both paths by just adjusting your transformations accordingly. Just note that if you need to call glClear when you render your "tiled map", the clear operation will not be limited by the currently set viewport, but the scissor test will limit even the clearing.
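For example, a minimal sketch of the scissor approach with LWJGL-style GL11 calls (the rectangle values are placeholders, and drawFrame/drawTiledMap stand in for your own rendering code):

import static org.lwjgl.opengl.GL11.*;

class MapPanel {
    void drawFrame()    { /* draw the UI/frame around the map (placeholder) */ }
    void drawTiledMap() { /* draw all tiles; anything outside the rectangle gets clipped */ }

    void render() {
        drawFrame();                  // unrestricted rendering first

        glEnable(GL_SCISSOR_TEST);
        glScissor(100, 80, 400, 300); // x, y, width, height in window pixels, origin at the lower-left corner
        glClear(GL_COLOR_BUFFER_BIT); // the clear is limited by the scissor rectangle too
        drawTiledMap();
        glDisable(GL_SCISSOR_TEST);
    }
}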
If your area cannot be described as an axis-aligned rectangle, I'd recommend having a look at the stencil buffer. The algorithm is simple:
Clear stencil buffer to 0.
Render the shape you want your tiled map to appear in, but only into the stencil buffer.
When rendering your tiled map, enable the stencil test and set it up so that it discards fragments for pixels where the stencil buffer is 0.
Steps 1 and 2 only have to be done once (as long as your area is not changing, and neither is your window size). Have a look at the glStencilFunc and glStencilOp functions for the details of how to do that.
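A rough sketch of those steps, again with LWJGL-style GL11 calls (drawMaskShape/drawTiledMap are placeholders for your own drawing code, and the window has to be created with a stencil buffer):

import static org.lwjgl.opengl.GL11.*;

class StencilMaskedMap {
    void drawMaskShape() { /* draw the arbitrary area shape (placeholder) */ }
    void drawTiledMap()  { /* draw the tiles (placeholder) */ }

    // Steps 1 and 2: write the shape into the stencil buffer only (done once).
    void buildMask() {
        glClearStencil(0);
        glClear(GL_STENCIL_BUFFER_BIT);
        glEnable(GL_STENCIL_TEST);
        glColorMask(false, false, false, false);   // don't touch the color buffer
        glStencilFunc(GL_ALWAYS, 1, 0xFF);         // every fragment passes...
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); // ...and writes 1 into the stencil
        drawMaskShape();
        glColorMask(true, true, true, true);
        glDisable(GL_STENCIL_TEST);
    }

    // Step 3: draw the map only where the stencil buffer contains 1.
    void render() {
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_EQUAL, 1, 0xFF);          // discard fragments where stencil != 1
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);    // leave the mask untouched
        drawTiledMap();
        glDisable(GL_STENCIL_TEST);
    }
}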
I'm trying to optimise a rendering engine in Java to not draw objects which are covered up by 'solid' child objects drawn in front of them, i.e. the parent is occluded by its children.
I'm wanting to know if an arbitrary BufferedImage I load in from a file contains any transparent pixels - as this affects my occlusion testing.
I've found I can use BufferedImage.getColorModel().hasAlpha() to find if the image supports alpha, but in the case that it does, it doesn't tell me if it definitely contains non-opaque pixels.
I know I could loop over the pixel data & test each one's alpha value & return as soon as I discover a non-opaque pixel, but I was wondering if there's already something native I could use, a flag that is set internally perhaps? Or something a little less intensive than iterating through pixels.
Any input appreciated, thanks.
Unfortunately, you will have to loop through each pixel (until you find a transparent pixel) to be sure.
If you don't need to be 100% sure, you could of course test only some pixels, where you think transparency is most likely to occur.
By looking at various images, I think you'll find that most images that have transparent parts contain transparency along the edges. This optimization will help in many common cases.
Unfortunately, I don't think that there's an optimization that can be done in one of the most common cases, the one where the color model allows transparency, but there really are no transparent pixels... You really need to test every pixel in this case, to know for sure.
Accessing the alpha values in their "native representation" (through the Raster/DataBuffer/SampleModel classes) is going to be faster than using BufferedImage.getRGB(x, y) and masking out the alpha value.
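For reference, a minimal sketch of the brute-force check using getRGB; a Raster-based version would use the same early-exit loop, just with cheaper per-pixel access:

import java.awt.image.BufferedImage;

class AlphaCheck {
    /** Returns true only if the image actually contains at least one non-opaque pixel. */
    static boolean hasTranslucentPixel(BufferedImage img) {
        if (!img.getColorModel().hasAlpha()) {
            return false;                          // no alpha channel at all
        }
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                // getRGB always returns ARGB; the top byte is the alpha value.
                if ((img.getRGB(x, y) >>> 24) != 0xFF) {
                    return true;                   // found a non-opaque pixel, stop early
                }
            }
        }
        return false;
    }
}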
I'm pretty sure you'll need to loop through each pixel and check for an Alpha value.
The best alternative I can offer is to write a custom method for reading the pixel data - ie your own Raster. Within this class, as you're reading the pixel data from the source file into the data buffer, you can check for the alpha values as you go. Of course, this isn't much help if you're using a built-in image reading class, and involves a lot more effort.
I am trying to render a transparent object in OpenGL, but sometimes the textures are visible through each other and sometimes they are not. As far as I know I have to render them from back to front, and it looks fine when I do it manually, but only from a specific perspective. Is there any method to calculate the order in which they should be rendered?
If you have transparent objects, then you need to use depth peeling or some other order-independent transparency method in order to render them properly if you aren't going to sort them.
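If you do decide to sort, the usual approach is to order the transparent objects by their (squared) distance from the camera every frame and draw them back to front. A minimal sketch, where Vec3 and RenderObject are stand-ins for whatever your engine actually uses; note that sorting whole objects by their centers is only an approximation and can still produce artifacts for large or intersecting geometry:

import java.util.Comparator;
import java.util.List;

class TransparencySort {
    static class Vec3 {
        float x, y, z;
        Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    }

    static class RenderObject {
        Vec3 position;   // object center, used as the sort key
    }

    static float distSq(Vec3 a, Vec3 b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    /** Sorts transparent objects far-to-near so the farthest one is drawn first. */
    static void sortBackToFront(List<RenderObject> transparentObjects, Vec3 cameraPos) {
        transparentObjects.sort(
            Comparator.comparingDouble((RenderObject o) -> distSq(o.position, cameraPos)).reversed());
    }
}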
I am attempting to create a "hole in the fog" effect. I have a background grid image, overlapped onto that I have a "fog" texture that I use to show that certain areas are not in view. I am attempting to cut a chunk out of "fog" that will show the area that is currently in view. I am trying to "mask" a part of the fog off the screen.
I made some images to help explain what I am after:
Background:
"Mask Image" (The full transparency has to be on the inside and not the outer rim for what I am going to use it for):
Fog (Sorry, hard to see.. Mostly Transparent):
What I want as a final Product:
I have tried:
Stencil-Buffer: I got this fully working except for one fact... I wasn't able to figure out how to retain the fading transparency of the "mask" image.
glBlendFunc: I have tried many different versions of the parameters and many other methods with it (glColorMask, glBlendEquation, glBlendFuncSeparate). I started by using some parameters that I found on this website: here. I used "glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);" as this seemed to be what I was looking for, but this is what ended up happening as a result: (It's hard to tell what is happening here, but there is fog covering the grid in the background. The mask, though, is just ending up as a fully opaque black blob when it's supposed to be a transparent part in the fog.)
Some previous code:
glEnable(GL_BLEND); // Not actually called here - it is called in the init function, since blending is needed throughout the rendering cycle.
renderFogTexture(delta, 0.55f); // Renders the fog texture over the background; 0.55f is the transparency of the image.
glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA); // The blend function I tried from one of the many websites I have been to today.
renderFogCircles(delta); // Draws one (or more) of the mask images to remove the fog in key places.
(I would have posted more code, but after trying many things I started removing some old code as it was getting very cluttered; I "backed it up" in block comments.)
This is doable, provided that you're not doing anything with the alpha of the framebuffer currently.
Step 1: Make sure that the alpha of the framebuffer is cleared to zero. So your glClearColor call needs to set the alpha to zero. Then call glClear as normal.
Step 2: Draw the mask image before you draw the "fog". As Tim said, once you blend with your fog, you can't undo that. So you need the mask data there first.
However, you also need to render the mask specially. You only want the mask to modify the framebuffer's alpha. You don't want it to mess with the RGB color. To do that, use this function: glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE). This turns off writes to the RGB part of the color; thus, only the alpha will be modified.
Your mask texture seems to have zero where it is visible and one where it isn't. However, the algorithm needs the opposite, so you should either fix your texture or use a glTexEnv mode that will effectively flip the alpha.
After this step, your framebuffer should have an alpha of 0 where we want to see the fog, and an alpha of 1 where we don't.
Also, don't forget to undo the glColorMask call after rendering the mask. You need to get those colors back.
Step 3: Render the fog. That's easy enough; to make the masking work, you need a special blend mode. Like this one:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ZERO, GL_ONE);
The separation between the RGB and A blend portions is important. You don't want to change the framebuffer's alpha (just in case you want to render more than one layer of fog).
And you're done.
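Putting the three steps together, here is a minimal sketch of the whole sequence with LWJGL-style GL11/GL14 calls (drawBackground/drawMaskQuads/drawFogQuad are placeholders for your own textured-quad rendering):

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL14.*;

class FogPass {
    void drawBackground() { /* draw the grid (placeholder) */ }
    void drawMaskQuads()  { /* draw the circular mask image(s) (placeholder) */ }
    void drawFogQuad()    { /* draw the full-screen fog texture (placeholder) */ }

    void render() {
        // Step 1: make sure the framebuffer alpha starts at zero.
        glClearColor(0f, 0f, 0f, 0f);
        glClear(GL_COLOR_BUFFER_BIT);
        drawBackground();

        // Step 2: write the mask into the framebuffer's alpha channel only.
        glColorMask(false, false, false, true);
        drawMaskQuads();
        glColorMask(true, true, true, true);   // get the colors back

        // Step 3: blend the fog against the destination alpha; where the
        // framebuffer alpha is 1 the fog is suppressed, where it is 0 it shows.
        glEnable(GL_BLEND);
        glBlendEquation(GL_FUNC_ADD);
        glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ZERO, GL_ONE);
        drawFogQuad();
    }
}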
The approach you're currently taking will not work, as once you draw the fog over the whole screen, there's no way to 'erase' it.
If you're using fixed pipeline:
You can use multitexturing (glTexEnv) with the fixed pipeline to combine the fog and circle textures in a single pass. This function is probably kind of confusing if you haven't used it before; you'll have to spend some time studying the man page. You'll do something like bind the fog to glActiveTexture 0 and the mask to glActiveTexture 1, enable multitexturing, and then combine them with glTexEnv. I don't remember exactly the right parameters for this.
If you're using shaders:
Use a multitexturing shader where you multiply the fog alpha with the circle texture (to zero out the alpha in the circle region), and then blend this combined texture into the background in a single pass. This is probably a more conceptually easy approach, but not sure if you're using shaders.
I'm not sure there's a way you can do this where you draw the fog and the mask in separate passes; as they both have their own alpha values, it will be difficult to combine them to get the right color result.