sizing images with OrthographicCamera - java

I am new to libgdx and this question might be obvious since they skip it in every tutorial.
But say I set a camera up like this:
cam = new OrthographicCamera(100, 100);
This means I will now be working with my own units instead of pixels. So how do I know what size to make an image? Say, for example, I want an image to fill the width of the camera and half of the height. How would I do this? Do I make the image 100x50px? That makes no sense to me.

You say you are working with your own units when you define your camera, yet you are still thinking in pixels when you ask whether you should make your image 100x50px.
Since you are working with your own units, I would assume that they are not completely detached from the original pixel units, meaning that everything should now be measured in your units, including the size of the images.
If you can calculate what 1 unit of yours represents pixel-wise, you can then determine the scale to apply to all of your images.
Then you can say that your image should be 100x50 units in size. You don't need to make the image itself fit this condition; you just need to adjust its scale so that it corresponds to your defined unit measurement.
If you are using SpriteBatch to draw your images, you might find that a couple of the draw overloads documented in the API take a scale (or an explicit width and height) for both X and Y, which could prove useful in this scenario.
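For example, here is a minimal sketch of that idea, assuming a 100x100-unit camera and a hypothetical image.png; the draw call sizes the image in world units no matter how many pixels the file actually has:

OrthographicCamera cam = new OrthographicCamera(100, 100);
SpriteBatch batch = new SpriteBatch();
Texture texture = new Texture(Gdx.files.internal("image.png")); // hypothetical file name

// in render():
cam.update();
batch.setProjectionMatrix(cam.combined);
batch.begin();
// fill the camera's width (100 units) and half its height (50 units);
// the camera is centered on the origin, so x runs from -50 to 50
batch.draw(texture, -50, -25, 100, 50);
batch.end();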

Related

What does a virtual display in Android do with image pixels when the display dimensions are set smaller than the grabbed ones?

I don't understand what happens to the pixels in a Virtual Display in Android when the output dimensions are reduced compared to the input ones.
When, for example, the input is the size of my display (1920x960) and I set the outputs to be 1920/3 and 960/3, what happens to the image pixels in that case:
is the pixel density increased, or
does it take only a smaller, centered part of the screen with dimensions 640x320, or
something else?
Additionally, is there a way to grab only the center part of the screen, as in the picture below?
By digging into the AOSP, looking at a thread, and experimenting, I have come to my own (possibly oversimplified) conclusion.
Android calls the native side through the Java SDK, which produces the information needed to render a bitmap/pixels in a Java program.
If the results are the same, there is no need to pass/copy them to the GPU again.
If the results are not the same, they are passed/copied to the GPU to be "invalidated" and then re-rendered.
Now to your question.
By looking at the Bitmap class and at this thread, it appears that resizing depends on the scaling ratio passed to the Matrix class.
If resized, it will expensively create a new Bitmap that ends up with either a rather bad-looking higher pixel density or a not-so-smooth lower pixel density.
If the pixel density is increased (smaller dimensions, your case) the image will look squashed, and where necessary the colors are averaged with the nearest neighbouring pixels ("kind of" like how JPEG works).
After resizing, the bitmap still keeps its origin (the top-left corner of the rendered object), which is defined by its X and Y coordinates.
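For illustration, a downscale like the one in the question could look like this with Bitmap.createScaledBitmap (the source bitmap name is an assumption; the filter flag enables the neighbour-averaging described above):

// shrink a 1920x960 bitmap to a third of its size
Bitmap scaled = Bitmap.createScaledBitmap(sourceBitmap, 1920 / 3, 960 / 3, true);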
For your second question, about screen grabbing, you can take a look at this and then programmatically crop the image by doing something like this:
//...
// note: createBitmap takes (source, x, y, width, height), not edge coordinates
Bitmap cropped = Bitmap.createBitmap(screenshotBitmap, x, y, width, height);
//...
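And if you specifically want the centered 640x320 region from the question, a sketch (the screenshot bitmap name is assumed) would be:

int cropW = 640, cropH = 320;
int x = (screenshotBitmap.getWidth() - cropW) / 2;
int y = (screenshotBitmap.getHeight() - cropH) / 2;
Bitmap center = Bitmap.createBitmap(screenshotBitmap, x, y, cropW, cropH);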

Real time screen recording in a LibGDX screen

In short, I'm making a simulation with a bunch of creatures that can see each other. The way I want to do this is to capture an area around each creature and feed it to its neural network, making the creatures evolve to recognize their surroundings. I am coding this in LibGDX, and I don't plan on taking screenshots every single frame, because I can imagine that is already a very poor idea. However, the problem is that I don't know how to get the pixels inside a defined square without capturing the entire screen and then cherry-picking what I want for each creature, which would cause a MASSIVE lag spike, since the area these creatures live in is 2000x2000, i.e. 4 million pixels and therefore 12 million separate RGB values.
Each creature is about 5 pixels wide and tall, so my idea is to give each one a 16x16 area around it, which is why iterating through the entire frame buffer won't work: it would pointlessly iterate through millions of values before finding the ones I asked for.
I would also need to be able to take pictures outside of the screen (as in, the part outside the window's boundaries), if that is even possible.
How can I achieve this? I'm aiming for performance, but I do not mind distributing the load between multiple frames or even multithreading.
The problem is you can't query pixels in a framebuffer.
You can capture a texture from a framebuffer, and you can convert a texture to a pixmap.
libgdx TextureRegion to Pixmap
You can then call getPixel(int x, int y) on the pixmap.
However, maybe going the other way would be better.
Start with a pixmap, work with the pixmap, and for each frame convert the pixmap to a texture and render that texture fullscreen. This also removes the need for the creatures environment to match the screen resolution (although you could still set it up like that).
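A minimal sketch of that pixmap-first approach (the 2000x2000 world size comes from the question; the creature coordinates and batch variable are assumptions):

// keep the simulation state in a Pixmap so individual pixels can be read directly
Pixmap world = new Pixmap(2000, 2000, Pixmap.Format.RGBA8888);
Texture worldTexture = new Texture(world);

// reading a 16x16 patch around a creature at (cx, cy):
int cx = 1000, cy = 1000; // assumed creature position
int[] patch = new int[16 * 16];
for (int dy = 0; dy < 16; dy++)
    for (int dx = 0; dx < 16; dx++)
        patch[dy * 16 + dx] = world.getPixel(cx - 8 + dx, cy - 8 + dy); // RGBA8888-packed int

// each frame, after the simulation has drawn into the pixmap:
worldTexture.draw(world, 0, 0); // re-upload the pixmap to the texture
batch.draw(worldTexture, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());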

Confused with image scaling and positioning in libgdx

I'm having quite a bit of difficulty wrapping my head around the actual display side of things with libgdx. That is, it just seems fairly jumbled in terms of what needs to be done in order to actually put something up onto the screen. I guess my confusion can sort of be separated into two parts:
What exactly needs to be done in terms of creating an image? There's Texture, TextureRegion, TextureAtlas, Sprite, Batch, and probably a few other art-related assets that I'm missing. How do these all relate and tie into each other? What's the "production chain" among these, I guess, would be a way of putting it.
In terms of putting whatever is created from the stuff above onto the monitor or display, how do the different coordinate and sizing measures relate and translate to and from each other? Say there's some image X that I want to put on the screen. It's got its own set of dimensions and coordinates, but then there's also a viewport size (is there a viewport position?) and a camera position (is there a camera size?). On top of all that, there's also the overall display size from Gdx.graphics. A few examples of things I might want to do could be as follows:
X is my "global map" that is bigger than my screen size. I want to be able to scroll/pan across it. What coordinates/positions should I use when displaying it?
Y is bigger than my screen size. I want to scale it down and have it always be in the center of the screen/display. What scaling factor do I use here, and which coordinates/positions?
Z is smaller than my screen size. I want to stick it in the upper left corner of my screen and have it "stick" to the global map I mentioned earlier. Which positioning system do I use?
Sorry if that was a bunch of stuff... I guess the tl;dr of the second part is: which set of positions/coordinates, sizes, and scales am I supposed to do everything in terms of?
I know this might be a lot to ask at once, and I also know that most of this stuff can be found online, but after sifting through tutorial after tutorial, I can't seem to get a straight answer as to how these things all relate to each other. Any help would be appreciated.
Texture is essentially the raw image data.
TextureRegion allows you to grab smaller areas from a larger texture. For example, it is common practice to pack all of the images for your game/app into a single large texture (the LibGDX “TexturePacker” is a separate program that does this) and then use regions of the larger texture for your individual graphics. This is done because switching textures is a heavy and slow operation and you want to minimize this process.
When you pack your images into a single large image with the TexturePacker it creates a “.atlas” file which stores the names and locations of your individual images. TextureAtlas allows you to load the .atlas file and then extract your original images to use in your program.
Sprite adds position and color capabilities to the texture. Notice that the Texture API has no methods for setting/getting position or color. Sprites will be your characters and other objects that you can actually move around and position on the screen.
Batch/SpriteBatch is an efficient way of drawing multiple sprites to the screen. Instead of making drawing calls for each sprite one at a time the Batch does multiple drawing calls at once.
And hopefully I'm not adding to the confusion, but another option I really like is using the "Actor" and "Stage" classes over the "Sprite" and "SpriteBatch" classes. Actor is similar to Sprite but adds functionality for moving/animating via the act method. The Stage replaces the SpriteBatch, as it uses its own internal SpriteBatch, so you do not need to use a SpriteBatch explicitly.
There is also an entire set of UI components (table, button, textfield, slider, progress bar, etc) which are all based off of Actor and work with the Stage.
I can’t really help with question 2. I stick to UI-based apps, so I don’t know the best practices for working with large game worlds. But hopefully someone more knowledgeable in that area can help you with that.
This was too long to reply as a comment, so I'm responding as another answer...
I think both Sprite/SpriteBatch and Actor/Stage are equally powerful, as you can still animate and move with Sprite/SpriteBatch, but Actor/Stage is easier to work with. The stage has two methods called "act" and "draw" which allow the stage to update and draw every actor it contains very easily. You override the act method for each of your actors to specify what kind of action you want it to do. Look up a few different tutorials for Stage/Actor with sample code and it should become clear how to use it.
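For instance, a sketch of that act/draw pattern (the drifting behaviour here is made up for illustration):

// an Actor whose act method drifts it right at 50 units per second
public class DriftingActor extends Actor {
    @Override
    public void act(float delta) {
        super.act(delta); // runs any attached Actions first
        moveBy(50 * delta, 0);
    }
}

// in create():
Stage stage = new Stage(new ScreenViewport());
stage.addActor(new DriftingActor());

// in render():
stage.act(Gdx.graphics.getDeltaTime());
stage.draw();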
Also, I was slightly incorrect before that “Actor” is equivalent to Sprite, because Sprite includes a texture, but Actor by itself does not have any kind of graphical component. There is an extension of Actor called “Image” that includes a Drawable, so the Image class is actually the equivalent to Sprite. Actor is the base class that provides the methods for acting (or “updating”), but it doesn’t have to be graphical. I've used Actors for other purposes such as triggering audio sounds at specific times.
Atlas creates the large Texture containing all of your png files and then allows you to get regions from it for individual png's. So the pipeline for getting a specific png graphic would be Atlas > Region > Sprite/Image. Both Image and Sprite classes have constructors that take a region.
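A minimal sketch of that pipeline (the atlas and region names are assumptions):

TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("game.atlas")); // produced by TexturePacker
Sprite hero = new Sprite(atlas.findRegion("hero")); // Sprite built from a region
hero.setPosition(100, 50);

// in render():
batch.begin();
hero.draw(batch);
batch.end();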

Game Dev - Working with Screen Coordinates vs World Coordinates

A friend and myself are new to game development, and we had a discussion regarding World Coordinates and Screen Coordinates.
We are following a wonderful online tutorial series for libGDX and they are using a 100 PPM (pixels per meter) scaling factor. If you re-size the screen, the scaling of objects no longer works. My friend is convinced that it is not a problem, and he may be right. But, I'm under the impression that when developing a game, the developers should typically only work with the pre-defined world coordinate system and let the camera transform it to the chosen screen coordinates. I do understand the need for reverse transformations when using mouseclicks, etc. But, the placing and scaling of objects in the world space is my concern.
I would like to reach out to this community for some professional feedback.
That's one of the biggest problems with almost all LibGDX tutorials. They are great, but the pixel to meter/unit conversion is just wrong.
Libgdx offers a great solution for that with Camera and an even better solution with the new Viewport classes (which under the hood work with Camera).
It is really simple and will solve the problem of different screen sizes/aspect ratios.
Just choose a Virtual_Width and Virtual_Height (think about them in meters or similar units).
For example, say you have humans fighting each other in a 2D platformer game. Let's say our humans are 2m tall, so think about how much screen space one human should use. If we say a human should take up 1/10 of the screen height, our virtual height is 10*2=20. Now think about the primary aspect ratio you are targeting. Let's say it is 16/9, so you have a virtual width of about 35.
Next, you need to think about what kind of Viewport you want. You will want one that supports a Virtual_Width and Virtual_Height.
You may want a Viewport which keeps the aspect ratio and fills the rest of the screen (if the screen has a different aspect ratio) with black bars (FitViewport), or you may want the Viewport to fill the whole screen by stretching the units (StretchViewport).
Now just create the Viewport with your virtual width and height, and update it in the resize() method with the given width and height.
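A minimal sketch of that setup, using the 35x20 virtual size worked out above:

private OrthographicCamera camera;
private FitViewport viewport;

@Override
public void create() {
    camera = new OrthographicCamera();
    viewport = new FitViewport(35, 20, camera); // virtual width/height in world units
}

@Override
public void resize(int width, int height) {
    viewport.update(width, height, true); // 'true' re-centers the camera
}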
Hope it helps.
It would be better named "units per meter".
And when you resize your screen you just set a new projection matrix, so everything works fine.
What you should worry about is the aspect ratio; everything else doesn't matter.
So, answering your question: stay with world coordinates.
It also makes it simple to add physics, lighting calculations, and sensible dimensions (1.8 units instead of 243 pixels).

Pixels to dips, desktop and android

I am using LibGDX to code an Android game and, as you may know, the many different screen resolutions cause problems if handled incorrectly. So I am trying to use the DP unit rather than pixels.
However, I have this method here:
public static float pixelToDP(float dp){
    // scales a dp value by the screen's density (i.e. converts dp to pixels)
    return dp * Gdx.graphics.getDensity();
}
the Gdx.graphics.getDensity() method actually gets the SCALE, so it's already done for me.
Now the problem: LibGDX is cross-platform, which is good for testing. When I launch this on my S4, which has a resolution of 1920x1080 at a whopping 480 dpi, as opposed to my terrible and overpriced laptop which has 1366x768 @ 92 dpi, the object is placed exactly where I want it. On desktop it is way off, a good few hundred pixels on the X and Y axes.
Is this due to the fact that my screen is measured @ 92 dpi, the resolution is a lot lower, and the actual game is not fullscreen on the desktop?
Here is the code for drawing the object:
table.setPosition(MathHelper.pixelToDP(150), MathHelper.pixelToDP(200));
In order to get it perfect on desktop I have to do:
table.setPosition(MathHelper.pixelToDP(480), MathHelper.pixelToDP(700));
Which is not even visible on my phone, since the scale is actually 3x, which puts it a good 200 pixels off the screen on the Y axis.
Is there a way around this? Or am I basically going to have to deal with doing platform checks and different blocks of code?
Possible solution:
So I changed my dp conversion method: if I were to do 100 * 0.5 it would return a new value of 50, but in reality I want the original value of 100 + 100 * 0.5.
Not sure if this is a proper fix or not, but regardless, my table is drawn in the exact same place on both laptop and phone:
public static float pixelToDP(float dp){
    // on low-density screens (scale < 1), pad the value up instead of shrinking it
    if(Gdx.graphics.getDensity() < 1)
        return dp + dp * Gdx.graphics.getDensity();
    return dp * Gdx.graphics.getDensity();
}
Is this just a cheap fix or is this pretty much how it should be done?
Usage of density-independent pixels implies that the physical size of the table should be the same on all screens. Since your laptop screen is (physically) much bigger, you would see the table as a lot smaller than expected.
I would suggest an alternative approach of placing objects in fractions of the screen size, e.g. 30% of the width or 45% of the height, as in the sketch below.
To implement this, just assume a stage resolution, place objects as you like, and then change the viewport in the resize method so that you get the full view.
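A quick sketch of that placement (the stage and table variables are assumed to exist):

// place the table at 30% of the stage width and 45% of the stage height
table.setPosition(stage.getWidth() * 0.30f, stage.getHeight() * 0.45f);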
Hope it helps.
For more,
https://code.google.com/p/libgdx-users/wiki/AspectRatio
The best approach for this is to manipulate the density based on the execution target.
So what I usually do is to store the density in a field in a singleton, and set it based on the scenario:
public class Game {
    public static float density;

    public static void initDensity(){
        // on desktop, simulate a denser screen; on a device, use the real density
        if (Gdx.app.getType() == Application.ApplicationType.Desktop){
            density = 2.0f;
        } else {
            density = Gdx.graphics.getDensity();
        }
    }

    public static float toPixel(float dip){
        return dip * density;
    }
}
With this approach you can "simulate" a denser screen than you actually have, and by using properties in your run config like -Ddensity=2 together with System.getProperty("density") you can vary the screens you'd like to simulate.
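A sketch of that property-based override, using the -Ddensity flag suggested above:

// read -Ddensity=2 from the run config, falling back to the real screen density
String override = System.getProperty("density");
density = (override != null) ? Float.parseFloat(override) : Gdx.graphics.getDensity();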
One approach is having a fixed viewport size. Create your camera at, for example, 1366x768 and place all your objects using those coordinates. Then the camera will fill the whole screen at every other resolution.
cam = new OrthographicCamera(1366, 768);
Try looking at a few tutorials. I personally think it is best to deal with pixels, and using the camera will help you a lot; check this link:
getting different screen resolutions
