(OpenGL) Clamping large content to smaller area - java

I'm using OpenGL with LWJGL in Java, but that's not important here. I'm not asking for code, but for a hint on how to do this. Language independent.
I have some region (a rectangle for simplicity) and, let's say, a big tiled map which I want to show in this area. The area is not the whole screen; I want to render something around it.
I know about a few approaches, but all are either huge pain or unsuitable.
Render the whole tiled map, then draw everything else (the background and the frame) on top, leaving a window through which the map shows. Yes, it works, but it'd be a pain.
Render only the visible tiles and only the visible portions of the border tiles.
Again, doable but hard, and e.g. when I use an external font-drawing library, I can't just tell it "Hey, stop at this line, there's my border." Not a very good approach, I'd say.
Some OpenGL magic which I'm not aware of.
Guide me.

When your area is guaranteed to be an axis-aligned rectangle, you can just use glViewport and/or glScissor (the latter together with glEnable(GL_SCISSOR_TEST)) to prevent OpenGL from rendering outside that rectangle.
If you modify the viewport, the resulting image is simply scaled to fit the viewport rectangle. With the scissor test, the area is just "cut out", and nothing is scaled with respect to the viewport setting. But the difference doesn't really matter - you can reach the same result via both paths by adjusting your transformations accordingly. Just note that if you need to call glClear when you render your "tiled map", the clear operation is not limited by the currently set viewport, but the scissor test does allow you to limit even the clearing.
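For illustration, a minimal sketch of the scissor variant in LWJGL (the area variables and the two render calls are placeholders, not anything from the question):

import static org.lwjgl.opengl.GL11.*;

// Restrict all rendering (including glClear) to the map rectangle.
// Note: glScissor takes window coordinates with the origin at the
// bottom-left corner, not the top-left.
glEnable(GL_SCISSOR_TEST);
glScissor(mapX, mapY, mapWidth, mapHeight);

glClear(GL_COLOR_BUFFER_BIT); // clears only the scissored rectangle
renderTiledMap();             // anything outside the rectangle is clipped

glDisable(GL_SCISSOR_TEST);   // then draw the frame/background normally
renderFrameAroundMap();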
If your area cannot be described as an axis-aligned rectangle, I'd recommend having a look at the stencil buffer. The algorithm is simple:
Clear the stencil buffer to 0.
Render the shape in which you want your tiled map to appear, but only into the stencil buffer.
When rendering your tiled map, enable the stencil test and set it up so that it discards fragments for pixels where the stencil buffer is 0.
Steps 1 and 2 only have to be done once (as long as neither your area nor your window size is changing). Have a look at the glStencilFunc and glStencilOp functions for the details of how to do that.
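A hedged sketch of that stencil setup in LWJGL, assuming the window was created with a stencil buffer (drawAreaShape and renderTiledMap are hypothetical calls):

import static org.lwjgl.opengl.GL11.*;

// One-time setup (repeat only when the area or the window size changes):
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);            // step 1: stencil = 0 everywhere
glEnable(GL_STENCIL_TEST);
glColorMask(false, false, false, false);   // don't touch the color buffer
glStencilFunc(GL_ALWAYS, 1, 0xFF);         // always pass...
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); // ...and write 1 where the shape is drawn
drawAreaShape();                           // step 2
glColorMask(true, true, true, true);

// Every frame, when drawing the map:
glStencilFunc(GL_EQUAL, 1, 0xFF);          // step 3: keep fragments only where stencil == 1
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
renderTiledMap();
glDisable(GL_STENCIL_TEST);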

Related

Confused with image scaling and positioning in libgdx

I'm having quite a bit of difficulty wrapping my head around the actual display side of things with libgdx. That is, it just seems fairly jumbled in terms of what needs to be done in order to actually put something up onto the screen. I guess my confusion can sort of be separated into two parts:
What exactly needs to be done in terms of creating an image? There's Texture, TextureRegion, TextureAtlas, Sprite, Batch, and probably a few other art-related assets that I'm missing. How do these all relate and tie into each other? What's the "production chain" among these, I guess, would be a way of putting it.
In terms of putting whatever is created from the stuff above onto the monitor or display, how do the different coordinate and sizing measures relate and translate to and from each other? Say there's some image X that I want to put on the screen. It's got its own set of dimensions and coordinates, but then there's also a viewport size (is there a viewport position?) and a camera position (is there a camera size?). On top of all that, there's also the overall display size from Gdx.graphics. A few examples of things I might want to do could be as follows:
X is my "global map" that is bigger than my screen size. I want to be able to scroll/pan across it. What are the coordinates/positions I should use when displaying it?
Y is bigger than my screen size. I want to scale it down and have it always be in the center of the screen/display. What scaling factor do I use here, and which coordinates/positions?
Z is smaller than my screen size. I want to stick it in the upper left corner of my screen and have it "stick" to the global map I mentioned earlier. Which positioning system do I use?
Sorry if that was a bunch of stuff... I guess the tl;dr of that second part is just which set of positions/coordinates, sizes, and scales am I supposed to do everything in terms of?
I know this might be a lot to ask at once, and I also know that most of this stuff can be found online, but after sifting through tutorial after tutorial, I can't seem to get a straight answer as to how these things all relate to each other. Any help would be appreciated.
Texture is essentially the raw image data.
TextureRegion allows you to grab smaller areas from a larger texture. For example, it is common practice to pack all of the images for your game/app into a single large texture (the LibGDX “TexturePacker” is a separate program that does this) and then use regions of the larger texture for your individual graphics. This is done because switching textures is a heavy and slow operation and you want to minimize this process.
When you pack your images into a single large image with the TexturePacker it creates a “.atlas” file which stores the names and locations of your individual images. TextureAtlas allows you to load the .atlas file and then extract your original images to use in your program.
Sprite adds position and color capabilities to the texture. Notice that the Texture API has no methods for setting/getting position or color. Sprites will be your characters and other objects that you can actually move around and position on the screen.
Batch/SpriteBatch is an efficient way of drawing multiple sprites to the screen. Instead of issuing a separate draw call for each sprite, the Batch collects many sprites and submits them together in far fewer calls.
And hopefully I'm not adding to the confusion, but another option I really like is using the "Actor" and "Stage" classes over the "Sprite" and "SpriteBatch" classes. Actor is similar to Sprite but adds additional functionality for moving/animating via the act method. The Stage replaces the SpriteBatch, as it uses its own internal SpriteBatch, so you do not need to use a SpriteBatch explicitly.
There is also an entire set of UI components (table, button, textfield, slider, progress bar, etc) which are all based off of Actor and work with the Stage.
I can’t really help with question 2. I stick to UI-based apps, so I don’t know the best practices for working with large game worlds. But hopefully someone more knowledgeable in that area can help you with that.
This was too long to reply as a comment so I'm responding as another answer...
I think both Sprite/SpriteBatch and Actor/Stage are equally powerful, as you can still animate and move with Sprite/SpriteBatch, but Actor/Stage is easier to work with. The stage has two methods called "act" and "draw" which allow the stage to update and draw every actor it contains very easily. You override the act method for each of your actors to specify what kind of action you want it to do. Look up a few different tutorials for Stage/Actor with sample code and it should become clear how to use it.
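For a flavor of that, a minimal hedged sketch (the asset path and positions are made up):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.scenes.scene2d.ui.Image;

Stage stage = new Stage();                       // uses its own internal SpriteBatch
Image hero = new Image(new Texture("hero.png")); // hypothetical asset
hero.setPosition(100, 50);
stage.addActor(hero);

// In render():
stage.act(Gdx.graphics.getDeltaTime()); // updates every actor the stage contains
stage.draw();                           // draws every actor the stage contains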
Also, I was slightly incorrect before in saying that "Actor" is equivalent to Sprite, because Sprite includes a texture, but Actor by itself does not have any kind of graphical component. There is an extension of Actor called "Image" that includes a Drawable, so the Image class is actually the equivalent of Sprite. Actor is the base class that provides the methods for acting (or "updating"), but it doesn't have to be graphical. I've used Actors for other purposes, such as triggering audio sounds at specific times.
Atlas creates the large Texture containing all of your PNG files and then allows you to get regions from it for the individual PNGs. So the pipeline for getting a specific PNG graphic would be Atlas > Region > Sprite/Image. Both the Image and Sprite classes have constructors that take a region.
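As a compact illustration of that production chain (the atlas file name and region name are made up):

import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

TextureAtlas atlas = new TextureAtlas("game.atlas"); // output of the TexturePacker
TextureRegion region = atlas.findRegion("player");   // one original png, by name
Sprite player = new Sprite(region);                  // adds position/color/rotation
player.setPosition(200, 120);

SpriteBatch batch = new SpriteBatch();
// In render():
batch.begin();
player.draw(batch);
batch.end();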

Render to half-sized framebuffer, then scale it up to achieve pixelated look

I am about to create a little 2D desktop game with libGdx and I want it to have that retro pixelated look you know from games like "Flappy Bird". To achieve that effect, I thought of the following:
1. Create a game window (e.g. 640x480).
2. Create a framebuffer half that size (i.e. 320x240).
3. Render everything to the framebuffer.
4. Get the texture from the framebuffer.
5. Draw the texture to the screen with SpriteBatch, scaling it up 2 times and using TextureFilter.Nearest.
I know I could scale each sprite individually with SpriteBatch.draw() but I thought, rendering everything at its original resolution and just scale up the final composition might be easier.
So would the above technique be an appropriate way of getting that pixelated look?
What you have in mind sounds like a perfectly fine approach. The downside is that it does involve an additional data copy, but on the other hand your original rendering is for only 1/4 of the pixels, which saves you quite a bit of rendering overhead.
In plain OpenGL, you could use glBlitFramebuffer() for step 5. This requires OpenGL 3.0 or higher. It's essentially the same operation as drawing a textured quad, but it's a single call, and the underlying implementation could potentially be more efficient.
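In libGDX terms, a rough sketch of the five steps might look like the following (assuming a 640x480 window; the y-flip in the final draw is needed because FBO color textures come out upside down):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;

FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGB888, 320, 240, false); // steps 1-2
SpriteBatch batch = new SpriteBatch();

// In render():
fbo.begin();
// ... draw the whole scene at 320x240 ...                                // step 3
fbo.end();

Texture tex = fbo.getColorBufferTexture();                                // step 4
tex.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);
batch.begin();                                                            // step 5
batch.draw(tex, 0, Gdx.graphics.getHeight(),
           Gdx.graphics.getWidth(), -Gdx.graphics.getHeight());           // negative height flips y
batch.end();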

OpenGL 3D transparency in voxels

(Sorry in advance, I'm not at my computer so I can't provide code)
I have a voxel based game, and I wanted to add in transparent textures, namely glass. I realized that transparent textures would throw things off (looking through other blocks), so I used my shaders to discard any fragments with an alpha lower than the alpha value of the glass texture. Currently, the glass has an alpha of 0.26 (values range from 0 to 1), and it works fine. But if I want to add some transparency to the glass instead of removing all transparency, I run into my old issue where you can see through the blocks behind the glass.
I read in a few places that I would need to sort my geometry from front to back. Does this still pertain to my situation even after using shaders? I use display lists for each chunk in my world; should I keep two lists per chunk, one for opaque blocks and one for transparent? Then, when I render, I would do one pass rendering the opaque lists, and a second pass rendering the chunks in reverse order using the transparent lists?
Or is there a better way to do this?
Being able to see through the blocks behind transparent geometry means you still have depth checking enabled, so the blocks behind the glass didn't get rendered (they fail the depth test against the glass).
I read in a few places that I would need to sort my geometry from front to back. Does this still pertain to my situation even after using shaders?
Yes, you still need to sort; shaders alone don't solve that issue.
However, you only need to sort the transparent geometry (back to front), rendering it after the opaque geometry, after turning on alpha blending.
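As a sketch of that two-list, two-pass idea (the Chunk class and its fields are hypothetical; the essential points are the back-to-front sort and disabling depth writes for the transparent pass):

import static org.lwjgl.opengl.GL11.*;
import java.util.Comparator;
import java.util.List;

// Hypothetical per-chunk record: two display lists plus a camera distance.
class Chunk {
    int opaqueList, transparentList; // display-list ids
    double distanceToCamera() { /* ... */ return 0; }
}

// In the renderer, with the visible chunks at hand:
List<Chunk> chunks = world.getVisibleChunks(); // hypothetical

// Pass 1: opaque blocks, any order, depth writes on.
glDisable(GL_BLEND);
glDepthMask(true);
for (Chunk c : chunks) glCallList(c.opaqueList);

// Pass 2: transparent blocks, sorted back to front, depth writes off
// (depth testing stays on, so glass hidden behind walls is still culled).
chunks.sort(Comparator.comparingDouble(c -> -c.distanceToCamera()));
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(false);
for (Chunk c : chunks) glCallList(c.transparentList);
glDepthMask(true);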

Using a semi transparent texture to "mask" out a part of the background

I am attempting to create a "hole in the fog" effect. I have a background grid image; overlapped onto that I have a "fog" texture that I use to show that certain areas are not in view. I am attempting to cut a chunk out of the "fog" that will show the area that is currently in view - to "mask" a part of the fog off the screen.
I made some images to help explain what I am after:
Background:
"Mask Image" (The full transparency has to be on the inside and not the outer rim for what I am going to use it for):
Fog (Sorry, hard to see.. Mostly Transparent):
What I want as a final Product:
I have tried:
Stencil-Buffer: I got this fully working except for one fact... I wasn't able to figure out how to retain the fading transparency of the "mask" image.
glBlendFunc: I have tried many different versions of the parameters, and many other methods with it (glColorMask, glBlendEquation, glBlendFuncSeparate). I started by using some parameters that I found on this website: here. I used glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA); as this seemed to be what I was looking for, but this is what ended up happening as a result: (it's hard to tell what is happening here, but there is fog covering the grid in the background, and the mask is just ending up as a fully opaque black blob when it's supposed to be a transparent part in the fog).
Some previous code:
glEnable(GL_BLEND); // Actually called in the init function of the program, as blending is needed all the way through the rendering cycle.
renderFogTexture(delta, 0.55f); // Renders the fog texture over the background; 0.55f is the transparency of the image.
glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA); // The parameters I tried from one of the many websites I visited today.
renderFogCircles(delta); // Draws one (or more) of the mask images to remove the fog in key places.
(I would have posted more code, but after trying many things I started removing some old code as it was getting very cluttered; I "backed it up" in block comments.)
This is doable, provided that you're not doing anything with the alpha of the framebuffer currently.
Step 1: Make sure that the alpha of the framebuffer is cleared to zero. So your glClearColor call needs to set the alpha to zero. Then call glClear as normal.
Step 2: Draw the mask image before you draw the "fog". As Tim said, once you blend with your fog, you can't undo that. So you need the mask data there first.
However, you also need to render the mask specially. You only want the mask to modify the framebuffer's alpha. You don't want it to mess with the RGB color. To do that, use this function: glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE). This turns off writes to the RGB part of the color; thus, only the alpha will be modified.
Your mask texture seems to have zero where it is visible and one where it isn't. However, the algorithm needs the opposite, so you should either fix your texture or use a glTexEnv mode that will effectively flip the alpha.
After this step, your framebuffer should have an alpha of 0 where we want to see the fog, and an alpha of 1 where we don't.
Also, don't forget to undo the glColorMask call after rendering the mask. You need to get those colors back.
Step 3: Render the fog. That's easy enough; to make the masking work, you need a special blend mode. Like this one:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ZERO, GL_ONE);
The separation between the RGB and A blend portions is important. You don't want to change the framebuffer's alpha (just in case you want to render more than one layer of fog).
And you're done.
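Putting the three steps together as a rough LWJGL-style sketch, reusing the renderFogTexture/renderFogCircles names from the question (the background call is hypothetical, and note it must not write to the destination alpha, per the caveat above):

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL14.*; // glBlendEquation, glBlendFuncSeparate

// Step 1: clear with alpha = 0.
glClearColor(0f, 0f, 0f, 0f);
glClear(GL_COLOR_BUFFER_BIT);

// Draw the background while leaving destination alpha untouched.
glColorMask(true, true, true, false);
renderBackgroundGrid(delta); // hypothetical
glColorMask(true, true, true, true);

// Step 2: write the mask into destination alpha only.
glColorMask(false, false, false, true);
renderFogCircles(delta); // mask must have alpha = 1 where the fog is cut away
glColorMask(true, true, true, true);

// Step 3: blend the fog against the destination alpha.
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ZERO, GL_ONE);
renderFogTexture(delta, 0.55f);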
The approach you're currently taking will not work, as once you draw the fog over the whole screen, there's no way to 'erase' it.
If you're using fixed pipeline:
You can use multitexturing (glTexEnv) with the fixed pipeline to combine the fog and circle textures in a single pass. This function is probably kind of confusing if you haven't used it before; you'll probably have to spend some time studying the man page. You'll do something like bind the fog to glActiveTexture 0 and the mask to glActiveTexture 1, enable multitexturing, and then combine them with glTexEnv. I don't remember exactly the right parameters for this.
If you're using shaders:
Use a multitexturing shader where you multiply the fog alpha with the circle texture (to zero out the alpha in the circle region), and then blend this combined texture into the background in a single pass. This is probably a more conceptually easy approach, but not sure if you're using shaders.
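For illustration, a minimal fragment shader along those lines, embedded as a Java string (all uniform and varying names are made up):

// Multiplies the fog's alpha by the mask's alpha in a single pass:
String fragmentShader =
    "uniform sampler2D u_fog;\n" +
    "uniform sampler2D u_mask;\n" +
    "varying vec2 v_texCoord;\n" +
    "void main() {\n" +
    "    vec4 fog   = texture2D(u_fog,  v_texCoord);\n" +
    "    float mask = texture2D(u_mask, v_texCoord).a;\n" +
    "    // mask = 0 inside the circle, zeroing the fog there\n" +
    "    gl_FragColor = vec4(fog.rgb, fog.a * mask);\n" +
    "}\n";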
I'm not sure there's a way you can do this where you draw the fog and the mask in separate passes, as they both have their own alpha values, it will be difficult to combine them to get the right color result.

Java Tile Game Zooming

I am building a 2D top-down tile based game in Java. Naturally you can pan around and zoom in on the game; there are currently 10 different zoom levels, where each tile ranges from 10x10 pixels to 100x100 pixels accordingly. Currently, the tiles for each zoom level are stored in separate sprite sheets, read in at the startup of the program and stored in a buffered image array. I am sure this can't be the best way to go about this.
I am looking for any tips to enhance efficiency for the long term. Would it be better to have only the 100x100 tiles and scale them dynamically in Java, or somehow use vector graphics in Java (I'm not sure how, but I'm sure Google could help me), or what?
Many thanks!
I'd go dynamic.
Normally in computer graphics you use matrices that, applied to the graphics context, modify everything you draw on it.
This is used to modify position, scale, rotation, etc. Rather than subtracting the camera position from every tile, you apply the translation once to the graphics context, and then you draw your tiles at their world positions. The graphics context will take care of placing the tiles in the correct screen space.
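For example, a sketch of that idea with Java2D's AffineTransform (the Tile class and the camera/zoom variables are hypothetical):

import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;
import java.util.List;

// Hypothetical tile: a world position plus its image.
class Tile { int worldX, worldY; BufferedImage image; }

void renderMap(Graphics2D g2d, List<Tile> visibleTiles,
               double zoom, double cameraX, double cameraY) {
    AffineTransform saved = g2d.getTransform();
    g2d.scale(zoom, zoom);             // apply the zoom once, for everything
    g2d.translate(-cameraX, -cameraY); // apply the camera once, for everything
    for (Tile t : visibleTiles)
        g2d.drawImage(t.image, t.worldX, t.worldY, null); // world coordinates
    g2d.setTransform(saved);           // restore before drawing any UI
}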
I suggest the following reads:
http://docs.oracle.com/javase/tutorial/2d/advanced/transforming.html
http://www.javalobby.org/java/forums/t19387.html
If you're doing fixed zooming (i.e. each zoom level is a fixed distance from the camera), as opposed to fluid zooming (the player can zoom in by 3.3x, 7.5x, and not just 1x, 2x, 3x, etc.), then it's massively wasteful to try to solve this by simply applying a zoom transform. It's tempting because that's the least complicated approach, and it's easy to understand from an implementation standpoint, but it means that at maximum zoom-out, you're going to be rendering an area that's 10x larger in the X direction, and 10x larger in the Y direction - so the area of the world that you have to render is 100x larger than at maximum zoom-in. I also doubt that you'll like the way your textures get squished by the hardware as you're zooming out. Computer graphics isn't the same as optics - subpixel rendering and other things that happen in computer graphics aren't going to make your textures look very good if you hand that task off to the software/hardware.
Even if you do fluid zooming, I would still do level-of-detail textures, and dynamically swap them out depending on the distance between the world being rendered, and the camera.
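A hedged sketch of such a level-of-detail lookup (the sheet array, the size table, and the 100-pixel base tile are all assumptions):

import java.awt.image.BufferedImage;

// Pick the prerendered sheet whose tile size is closest to (but not
// below) the current on-screen tile size, and let the transform do the rest.
BufferedImage pickSheet(BufferedImage[] sheets, int[] tileSizes, float zoom) {
    int target = Math.round(100 * zoom); // 100 px tiles at 1.0x zoom
    for (int i = 0; i < tileSizes.length; i++)
        if (tileSizes[i] >= target) return sheets[i]; // assumes ascending sizes
    return sheets[sheets.length - 1];    // fall back to the largest sheet
}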
Also, 10 zoom levels? Are you sure you really need 10 zoom levels? Zoom is usually used in 2D games to allow you to perform different activities at different levels of detail because a particular zoom level is especially well suited for a certain set of activities. I don't remember any 2D game that needed 10 zoom levels to accomplish this. 3-5 is the most I've ever seen, and I've never felt that it wasn't enough. It also seems like a lot of art work to produce the images at every zoom level for 10 zoom levels.
You're also likely going to find that applying an AffineTransform sounds like a good idea, but that it's extremely computationally expensive, and if you need 60fps performance, you're highly unlikely to achieve it this way. Don't take my word for it though, go try it and see how badly it falls over on itself.
