I've been trying to port my previous game from C# to Java. I'm wondering how I can create graphics layers that I can draw tiles on.
Besides the depth buffer, the color buffer and the stencil buffer, you can use a Frame Buffer Object (FBO): http://www.songho.ca/opengl/gl_fbo.html.
It can be used as a drawing destination. For example, to make a mirror you first render the mirror's point of view onto a temporary texture and then render the mirror with that texture. In the same way you can create a texture for each layer, so you can draw exactly on the layer you need and at the end render all the layers at different heights (or whatever you want to do with them).
Or, as Tim commented, when you want to draw something on layer n you simply render it at height z = n. That way you won't have a separate image per layer, only everything combined, so if you need the individual layers for post-processing (special effects on different layers) or for saving them as images you should use an FBO. In some cases, though, you can just apply different shaders when drawing on different layers.
An FBO is harder to use but a much more powerful tool.
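For illustration, here is a rough sketch of the texture-per-layer idea, assuming libGDX (the question doesn't name a library; with plain OpenGL you would create and bind the FBOs yourself as the linked article describes). Each layer gets its own off-screen buffer, and the layers are composited bottom to top at the end:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;

public class LayerRenderer {
    private final FrameBuffer[] layers;
    private final SpriteBatch batch = new SpriteBatch();

    public LayerRenderer(int layerCount, int width, int height) {
        layers = new FrameBuffer[layerCount];
        for (int i = 0; i < layerCount; i++) {
            // one off-screen color buffer per layer (width/height = screen size)
            layers[i] = new FrameBuffer(Pixmap.Format.RGBA8888, width, height, false);
        }
    }

    public void renderLayer(int index, Runnable drawTiles) {
        layers[index].begin();                 // redirect rendering into this layer
        Gdx.gl.glClearColor(0, 0, 0, 0);       // clear to transparent
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        drawTiles.run();                       // draw your tiles with `batch` here
        batch.end();
        layers[index].end();                   // back to the default framebuffer
    }

    public void composite() {
        batch.begin();
        for (FrameBuffer layer : layers) {     // bottom layer first, top layer last
            TextureRegion region = new TextureRegion(layer.getColorBufferTexture());
            region.flip(false, true);          // FBO color textures come out y-flipped
            batch.draw(region, 0, 0);
        }
        batch.end();
    }
}

Because each layer lives in its own texture, you can also post-process or save an individual layer before compositing, which is exactly what the plain z = n approach can't give you.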
What works best for me (in 2D games):
Z-buffer: set up the Z-buffer once, then assign a Z value to each draw call and that's it (but this fails for semi-transparent objects)
Draw order: draw the lowest layer first and the top layer last (slower than the Z-buffer)
I'm having quite a bit of difficulty wrapping my head around the actual display side of things with libgdx. That is, it just seems fairly jumbled in terms of what needs to be done in order to actually put something up onto the screen. I guess my confusion can sort of be separated into two parts:
What exactly needs to be done in terms of creating an image? There's Texture, TextureRegion, TextureAtlas, Sprite, Batch, and probably a few other art-related assets that I'm missing. How do these all relate and tie into each other? What's the "production chain" among these, I guess, would be a way of putting it.
In terms of putting whatever is created from the stuff above onto the monitor or display, how do the different coordinate and sizing measures relate and translate to and from each other? Say there's some image X that I want to put on the screen. It's got its own set of dimensions and coordinates, but then there's also a viewport size (is there a viewport position?) and a camera position (is there a camera size?). On top of all that, there's also the overall display size from Gdx.graphics. A few examples of things I might want to do could be as follows:
X is my "global map" that is bigger than my screen
size. I want to be able to scroll/pan across it. What are the
coordinates/positions I should use when displaying it?
Y is bigger
than my screen size. I want to scale it down and have it always be
in the center of the screen/display. What scaling factor do I use
here, and which coordinates/positions?
Z is smaller than my screen
size. I want to stick it in the upper left corner of my screen and
have it "stick" to the global map I mentioned earlier. Which
positioning system do I use?
Sorry if that was a bunch of stuff... I guess the tl;dr of that second part is just which set of positions/coordinates, sizes, and scales am I supposed to do everything in terms of?
I know this might be a lot to ask at once, and I also know that most of this stuff can be found online, but after sifting through tutorial after tutorial, I can't seem to get a straight answer as to how these things all relate to each other. Any help would be appreciated.
Texture is essentially the raw image data.
TextureRegion allows you to grab smaller areas from a larger texture. For example, it is common practice to pack all of the images for your game/app into a single large texture (the LibGDX “TexturePacker” is a separate program that does this) and then use regions of the larger texture for your individual graphics. This is done because switching textures is a heavy and slow operation and you want to minimize this process.
When you pack your images into a single large image with the TexturePacker it creates a “.atlas” file which stores the names and locations of your individual images. TextureAtlas allows you to load the .atlas file and then extract your original images to use in your program.
Sprite adds position and color capabilities to the texture. Notice that the Texture API has no methods for setting/getting position or color. Sprites will be your characters and other objects that you can actually move around and position on the screen.
Batch/SpriteBatch is an efficient way of drawing multiple sprites to the screen. Instead of issuing a separate draw call for each sprite, the Batch collects many sprites and submits them to the GPU together.
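To make the chain concrete, here is a minimal sketch of Texture -> TextureRegion -> Sprite -> SpriteBatch (the file name and region coordinates are made up):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

public class MyGame extends ApplicationAdapter {
    private Texture sheet;              // the raw image data
    private TextureRegion playerFrame;  // a sub-rectangle of that texture
    private Sprite player;              // region + position/size/color
    private SpriteBatch batch;          // batches the actual draw calls

    @Override
    public void create() {
        batch = new SpriteBatch();
        sheet = new Texture(Gdx.files.internal("sheet.png"));  // hypothetical file
        playerFrame = new TextureRegion(sheet, 0, 0, 32, 32);  // 32x32 area of the sheet
        player = new Sprite(playerFrame);
        player.setPosition(100, 50);
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        player.draw(batch);   // the Sprite knows its region and position
        batch.end();
    }

    @Override
    public void dispose() {
        batch.dispose();
        sheet.dispose();
    }
}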
And hopefully I’m not adding to the confusion, but another option I really like is using the “Actor” and “Stage” classes over the “Sprite” and “SpriteBatch” classes. Actor is similar to Sprite but adds additional functionality for moving/animating via the act method. The Stage replaces the SpriteBatch, as it uses its own internal SpriteBatch, so you do not need to use a SpriteBatch explicitly.
There is also an entire set of UI components (table, button, textfield, slider, progress bar, etc) which are all based off of Actor and work with the Stage.
I can’t really help with question 2. I stick to UI-based apps, so I don’t know the best practices for working with large game worlds. But hopefully someone more knowledgeable in that area can help you with that.
This was too long to reply as a comment, so I’m responding as another answer...
I think both Sprite/SpriteBatch and Actor/Stage are equally powerful as you can still animate and move with Sprite/SpriteBatch, but Actor/Stage is easier to work with. The stage has two methods called “act” and “draw” which allows the stage to update and draw every actor it contains very easily. You override the act method for each of your actors to specify what kind of action you want it to do. Look up a few different tutorials for Stage/Actor with sample code and it should become clear how to use it.
Also, I was slightly incorrect before that “Actor” is equivalent to Sprite, because Sprite includes a texture, but Actor by itself does not have any kind of graphical component. There is an extension of Actor called “Image” that includes a Drawable, so the Image class is actually the equivalent to Sprite. Actor is the base class that provides the methods for acting (or “updating”), but it doesn’t have to be graphical. I've used Actors for other purposes such as triggering audio sounds at specific times.
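As a rough sketch of that setup (class and file names here are made up), an Actor's per-frame logic goes into act() and the Stage drives all of its actors:

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.scenes.scene2d.ui.Image;

public class StageGame extends ApplicationAdapter {
    private Stage stage;

    // An Image actor that drifts to the right; act() is where the update logic lives.
    static class DriftingImage extends Image {
        DriftingImage(Texture texture) { super(texture); }

        @Override
        public void act(float delta) {
            super.act(delta);         // keeps any attached Actions running
            moveBy(50 * delta, 0);    // move 50 units per second
        }
    }

    @Override
    public void create() {
        stage = new Stage();          // uses its own internal SpriteBatch
        stage.addActor(new DriftingImage(new Texture(Gdx.files.internal("player.png"))));
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        stage.act(Gdx.graphics.getDeltaTime());  // updates every actor it contains
        stage.draw();                            // draws every actor it contains
    }

    @Override
    public void dispose() {
        stage.dispose();
    }
}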
The Atlas creates the large Texture containing all of your PNG files and then allows you to get regions from it for individual PNGs. So the pipeline for getting a specific PNG graphic would be Atlas > Region > Sprite/Image. Both the Image and Sprite classes have constructors that take a region.
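In code, that pipeline might look like this (the atlas path and region name are hypothetical):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.scenes.scene2d.ui.Image;

public class AtlasPipeline {
    // Atlas > Region > Sprite/Image
    public static void load() {
        TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas")); // output of TexturePacker
        TextureRegion region = atlas.findRegion("player");  // one of the original PNGs, by name
        Sprite sprite = new Sprite(region);                 // for SpriteBatch-style drawing
        Image image = new Image(region);                    // for Stage/Actor-style drawing
    }
}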
I am about to create a little 2D desktop game with libGdx and I want it to have that retro pixelated look you know from games like "Flappy Bird". To achieve that effect, I thought of the following:
Create a game window (e.g. 640x480)
Create a framebuffer half that size (i.e. 320x240)
Render everything to the framebuffer
Get the texture from the framebuffer
Draw the texture to the screen with SpriteBatch, scaling it 2 times up and using TextureFilter.Nearest.
I know I could scale each sprite individually with SpriteBatch.draw(), but I thought rendering everything at its original resolution and just scaling up the final composition might be easier.
So would the above technique be an appropriate way of getting that pixelated look?
What you have in mind sounds like a perfectly fine approach. The downside is that it does involve an additional data copy, but on the other hand your original rendering is for only 1/4 of the pixels, which saves you quite a bit of rendering overhead.
In plain OpenGL, you could use glBlitFramebuffer() for step 5. This requires OpenGL 3.0 or higher. It's essentially the same operation as drawing a textured quad, but it's a single call, and the underlying implementation could potentially be more efficient.
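For reference, here is a minimal sketch of the five steps from the question, assuming libGDX's FrameBuffer and OrthographicCamera classes (note that the FBO's color texture comes out y-flipped relative to SpriteBatch's coordinates, so the region is flipped once before drawing):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;

public class PixelGame extends ApplicationAdapter {
    private FrameBuffer fbo;
    private SpriteBatch batch;
    private TextureRegion fboRegion;
    private OrthographicCamera lowResCam, screenCam;

    @Override
    public void create() {
        batch = new SpriteBatch();
        fbo = new FrameBuffer(Pixmap.Format.RGB888, 320, 240, false);  // low-res render target
        Texture tex = fbo.getColorBufferTexture();
        tex.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);
        fboRegion = new TextureRegion(tex);
        fboRegion.flip(false, true);                                   // undo the y-flip

        lowResCam = new OrthographicCamera();
        lowResCam.setToOrtho(false, 320, 240);                         // game coordinates
        screenCam = new OrthographicCamera();
        screenCam.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    }

    @Override
    public void render() {
        // Steps 2-4: render the whole scene at 320x240 into the frame buffer.
        fbo.begin();
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.setProjectionMatrix(lowResCam.combined);
        batch.begin();
        // ... draw the game here in 320x240 coordinates ...
        batch.end();
        fbo.end();

        // Step 5: blow the result up to the window size with nearest filtering.
        batch.setProjectionMatrix(screenCam.combined);
        batch.begin();
        batch.draw(fboRegion, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        batch.end();
    }
}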
(Sorry in advance, I'm not at my computer so I can't provide code)
I have a voxel-based game, and I wanted to add transparent textures, namely glass. I realized that transparent textures would throw things off (you could look through other blocks), so I used my shaders to discard any fragments with an alpha lower than the alpha value of the glass texture. Currently the glass has an alpha of 0.26 (values range from 0 to 1), and it works fine. But if I want to add some transparency to the glass instead of removing all transparency, I run into my old issue where you can see through the blocks behind the glass.
I read in a few places that I would need to sort my geometry from front to back. Does this still apply to my situation even though I'm using shaders? I use display lists for each chunk in my world; should I keep two lists per chunk, one for opaque blocks and one for transparent ones? So when I render, I would do one pass rendering the opaque lists, and then a second pass over the chunks in reverse order rendering the transparent lists?
Or is there a better way to do this?
Being able to look through the blocks behind transparent geometry means you still have depth testing enabled, which means the blocks behind the glass didn't get rendered (they were drawn after the glass and failed the depth test).
I read in a few places that I would need to sort my geometry from front to back. Does this still pertain to my situation even after using shaders?
Yes, you still need to sort your geometry; shaders don't solve that issue.
However, you only need to sort the transparent geometry, and you render it after the opaque geometry (with alpha blending turned on).
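A sketch of that two-pass idea, assuming LWJGL-style static GL11 bindings and a hypothetical Chunk type holding one opaque and one transparent display list per chunk:

import static org.lwjgl.opengl.GL11.*;

import java.util.Comparator;
import java.util.List;

class Chunk {
    int opaqueList, transparentList;   // display list ids
    float cx, cy, cz;                  // chunk center, used for sorting

    float distanceSqTo(float ex, float ey, float ez) {
        float dx = cx - ex, dy = cy - ey, dz = cz - ez;
        return dx * dx + dy * dy + dz * dz;
    }
}

class ChunkRenderer {
    void render(List<Chunk> chunks, float eyeX, float eyeY, float eyeZ) {
        // Pass 1: opaque geometry with normal depth testing and depth writes.
        glEnable(GL_DEPTH_TEST);
        glDepthMask(true);
        glDisable(GL_BLEND);
        for (Chunk c : chunks) {
            glCallList(c.opaqueList);
        }

        // Pass 2: transparent geometry, sorted back to front, blending on,
        // depth writes off so transparent faces don't hide each other.
        chunks.sort(Comparator.comparingDouble(
                (Chunk c) -> c.distanceSqTo(eyeX, eyeY, eyeZ)).reversed());
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(false);
        for (Chunk c : chunks) {
            glCallList(c.transparentList);
        }
        glDepthMask(true);
    }
}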
I have a graphics application in Java, which is made up of many different shapes (lines, circles, arcs, etc., drawn via the Graphics.drawLine(), drawArc()... methods). I would like to create mouse-over events on many, if not all, of the drawn objects.
What I was thinking was to store some sort of bitmap with metadata in it, and use that to figure out which object the mouse is over. Is there a way to do this in Java? (looping through all the objects per mouse move doesn't seem viable).
Thanks,
John
Key-color solution
(moved from comment)
Create an off-screen graphics buffer (such as a BufferedImage), the same size as the subject image.
Draw all objects into this buffer, each object with its own unique color. Depending on the object count you can optimize the image buffer: for example, use 8-bit graphics.
Read the resulting image buffer per pixel (see, for example, "Java - get pixel array from image"). Determine the pixel color at the current mouse position and map the color index (or RGB value) back to the source object. A short sketch of this approach follows the pros and cons below.
Pros:
The solution is "pixel-accurate": Object boundaries are exact - pixel to pixel.
Easy to solve the overlapping objects problem: just draw them in the desired order.
Object complexity is not limited. Theoretically bitmaps are also possible.
Cons:
To move one object, the complete off-screen buffer must be repainted.
The number of objects may be limited when using a low-bit image buffer.
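Here is a minimal sketch of the key-color approach in plain AWT. The PickBuffer name and the use of java.awt.Shape are just for illustration; in your application you would paint each line/arc/circle with its index color using the same Graphics calls you already use (with antialiasing off, so the colors stay exact):

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.Shape;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

class PickBuffer {
    private final BufferedImage buffer;
    private final List<Shape> shapes = new ArrayList<>();

    PickBuffer(int width, int height) {
        buffer = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    }

    // Redraw the pick buffer; call this whenever an object is added or moved.
    void rebuild(List<Shape> sceneShapes) {
        shapes.clear();
        shapes.addAll(sceneShapes);
        Graphics2D g = buffer.createGraphics();
        g.setColor(Color.BLACK);               // RGB 0 = background
        g.fillRect(0, 0, buffer.getWidth(), buffer.getHeight());
        for (int i = 0; i < shapes.size(); i++) {
            g.setColor(new Color(i + 1));      // unique RGB value per shape (index + 1)
            g.fill(shapes.get(i));             // for lines/arcs, use g.draw() with a wide Stroke
        }
        g.dispose();
    }

    // Returns the shape under the mouse, or null over the background.
    Shape pick(int mouseX, int mouseY) {
        int rgb = buffer.getRGB(mouseX, mouseY) & 0xFFFFFF;
        return rgb == 0 ? null : shapes.get(rgb - 1);
    }
}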
It depends on your specifications. You do not mention whether those shapes are allowed to overlap or move, how many of them can exist, etc.
Solution a) The easiest approach that comes to mind is to implement each shape as a JComponent descendant (e.g. a JPanel). So you would have a CirclePanel, an ArcPanel, etc. that extend JPanel, and each one of them paints itself in the same way it is being done now.
Having the shapes as a JComponent allows you to add a MouseListener to each panel that would then handle the mouseEntered(), mouseExited() events.
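A small sketch of solution a), with a made-up CirclePanel as an example; the MouseListener gives you the mouseEntered()/mouseExited() events per shape:

import java.awt.Color;
import java.awt.Graphics;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JPanel;

class CirclePanel extends JPanel {
    private boolean hovered;

    CirclePanel() {
        setOpaque(false);
        addMouseListener(new MouseAdapter() {
            @Override public void mouseEntered(MouseEvent e) { hovered = true;  repaint(); }
            @Override public void mouseExited(MouseEvent e)  { hovered = false; repaint(); }
        });
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setColor(hovered ? Color.RED : Color.BLUE);     // highlight on mouse-over
        g.fillOval(0, 0, getWidth() - 1, getHeight() - 1);
    }
}

One caveat of this approach: a JComponent's bounds are rectangular, so for non-rectangular shapes the mouse-over area will not exactly match the drawn outline unless you also override contains().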
Solution b) If on the other hand you need to draw all the shapes on a single component's area (as I understand is the case now) then you still do not need to iterate over all the shapes. You just need to introduce an algorithm to categorize the shapes based on their position, to be able to exclude them fast inside your "isMouseOver(Shape s)" test procedure.
For example, let's say you divide the area into two equal sub-areas, left and right (let's call them tiles). When you create each shape you test which tile it intersects and store this information both in the shape and in the corresponding tile.
Now when you need to test whether the mouse is over a shape, you determine which tile the mouse is over. This way you only have to check the shapes that intersect that tile. Assuming your shapes are distributed uniformly on the screen, you have just rejected 50% of the shapes with one test.
Depending on how many shapes you have, you could use 4 or 8 tiles, or you could even create/delete tiles dynamically (e.g. based on how many objects tend to gather in one area of the screen or not).
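A rough sketch of the tile idea with a fixed grid (ShapeGrid and its method names are made up for illustration):

import java.awt.Rectangle;
import java.awt.Shape;
import java.util.ArrayList;
import java.util.List;

class ShapeGrid {
    private final int cols, rows, tileW, tileH;
    private final List<List<Shape>> tiles = new ArrayList<>();

    ShapeGrid(int width, int height, int cols, int rows) {
        this.cols = cols;
        this.rows = rows;
        this.tileW = width / cols;
        this.tileH = height / rows;
        for (int i = 0; i < cols * rows; i++) tiles.add(new ArrayList<>());
    }

    // Register a shape in every tile its bounding box touches.
    void add(Shape s) {
        Rectangle b = s.getBounds();
        for (int ty = b.y / tileH; ty <= (b.y + b.height) / tileH; ty++) {
            for (int tx = b.x / tileW; tx <= (b.x + b.width) / tileW; tx++) {
                if (tx >= 0 && tx < cols && ty >= 0 && ty < rows) {
                    tiles.get(ty * cols + tx).add(s);
                }
            }
        }
    }

    // Only the shapes registered in the tile under the mouse are tested.
    Shape shapeAt(int mouseX, int mouseY) {
        int tx = Math.min(mouseX / tileW, cols - 1);
        int ty = Math.min(mouseY / tileH, rows - 1);
        for (Shape s : tiles.get(ty * cols + tx)) {
            if (s.contains(mouseX, mouseY)) return s;
        }
        return null;
    }
}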
I would suggest trying the first solution because it is the easier and cleaner approach. If you decide that it does not fit your needs, you could then go for an approach similar to the second one.
I have rendered a 3D scene in OpenGL, viewed with a gluOrtho (orthographic) projection. In my application I am looking at the front face of a cube with a volume of 100x70x60 mm (which I represent as 1000x700x600 pixels). Inside this cube I have rendered a simple blue sphere which sits exactly in the middle and 'fills' the cube (radius 300 pixels).
I now want to read the color value of pixels (in 3D) at specific points within the cube; i.e. I wish to know if say point (100,100,-200) is blue or blank (black).
glReadPixels only allows 2D extraction of color, and I have tried it with the DEPTH_COMPONENT, but I am unsure what this should return in byte form. Is there a way to combine the two? Am I missing something?
I am using Eclipse with Java and JOGL.
This can't be done in the context of OpenGL--you'll need some sort of scene graph or other space partitioning scheme working in concert with your application's data structures.
The reason is simple: the frame buffer only stores the color and depth of the fragment nearest to the eye at each pixel location (assuming a normal GL_LESS depth function). The depth value stored in the Z-buffer is used to determine if each subsequent fragment is closer or farther from the eye than the existing fragment, and thus whether the new fragment should replace the old or not. The frame buffer only stores color and depth values from the most recent winner of the depth test, not the entire set of fragments that would have mapped to that pixel location. Indeed, there would be no way to bound the amount of graphics memory required if that were the case.
You're not the first to fall for this misconception, so I'll say it in the bluntest way possible: OpenGL doesn't work that way. OpenGL never(!) deals with objects or any complex scenes. The only things OpenGL knows about are framebuffers, shaders and single triangles. Whenever you draw an object, usually composed of triangles, OpenGL only sees one triangle at a time. And once something has been drawn to the framebuffer, whatever was there before is lost.
There are algorithms, based on the concepts of rasterizers (which OpenGL is), that decompose a rendered scene into its parts; depth peeling is one of them.