Pixels to dips, desktop and Android - Java

I am using libGDX to code an Android game, and as you may know, supporting many screen resolutions causes problems if handled incorrectly. So I am trying to work in density-independent pixels (dp) rather than raw pixels.
However, I have this method here:
// converts a dp value to actual pixels by multiplying by the density scale
public static float pixelToDP(float dp) {
    return dp * Gdx.graphics.getDensity();
}
The Gdx.graphics.getDensity() method actually returns the density scale factor (screen DPI divided by 160), so the scaling is already done for me.
Now the problem: libGDX is cross-platform, which is good for testing. When I launch this on my S4, which has a 1920x1080 resolution at a whopping 480 dpi, the table is placed exactly where I want it. On my terrible and overpriced laptop, which has a 1366x768 screen at 92 dpi, it is way off, by a good few hundred pixels on both the X and Y axes.
Is this because my screen is only 92 dpi, the resolution is a lot lower, and the actual game is not fullscreen on the desktop?
Here is the code for drawing the object:
table.setPosition(MathHelper.pixelToDP(150), MathHelper.pixelToDP(200));
In order to get it perfect on desktop I have to do:
table.setPosition(MathHelper.pixelToDP(480), MathHelper.pixelToDP(700));
That position is not even visible on my phone, since the scale there is actually 3x, which puts the table a good 200 pixels off the screen on the Y axis.
Is there a way around this? Or am I basically going to have to deal with doing platform checks and different blocks of code?
Possible solution:
So I changed my dp conversion method. If I did 100 * 0.5 it would return a new value of 50, but in reality I want the original value plus the scaled part: 100 + 100 * 0.5 = 150.
Not sure if this is a proper fix or not, but regardless, my table is drawn in the exact same place on both laptop and phone:
public static float pixelToDP(float dp) {
    // on low-density screens (scale < 1) pad the value instead of shrinking it
    if (Gdx.graphics.getDensity() < 1)
        return dp + dp * Gdx.graphics.getDensity();
    return dp * Gdx.graphics.getDensity();
}
Is this just a cheap fix or is this pretty much how it should be done?

Usage of density-independent pixels implies that the physical size of the table should be the same on all screens. Since your laptop screen is (physically) much bigger, you would see the table appear a lot smaller than expected.
I would suggest an alternative approach of placing objects as fractions of the screen size, e.g. 30% of the width or 45% of the height.
To implement this, just assume a stage resolution and place objects as you like, then change the viewport in the resize method so that you get the full view, as in the sketch below.
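A minimal sketch of that idea, assuming a virtual 800x480 stage resolution (the numbers and the Table are placeholders; Stage and FitViewport are standard libGDX classes):
// assume a virtual 800x480 stage resolution (placeholder numbers)
Stage stage = new Stage(new FitViewport(800, 480));
Table table = new Table();
// 30% of the virtual width, 45% of the virtual height
table.setPosition(800 * 0.30f, 480 * 0.45f);
stage.addActor(table);

// in resize(), let the viewport rescale the virtual resolution to the real screen
public void resize(int width, int height) {
    stage.getViewport().update(width, height, true);
}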
Hope it helps.
For more, see:
https://code.google.com/p/libgdx-users/wiki/AspectRatio

The best approach for this is to manipulate the density based on the execution target.
So what I usually do is store the density in a static field in a singleton and set it based on the scenario:
public class Game {
    public static float density;

    public static void initDensity() {
        // on desktop there is no meaningful device density, so pick one to simulate
        if (Gdx.app.getType() == Application.ApplicationType.Desktop) {
            density = 2.0f;
        } else {
            density = Gdx.graphics.getDensity();
        }
    }

    public static float toPixel(float dip) {
        return dip * density;
    }
}
With this approach you can "simulate" a denser screen than you actually have, and by putting a property like -Ddensity=2 in your run configuration and reading it with System.getProperty("density"), you can vary which screen you would like to simulate.
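A minimal sketch of that property trick (the fallback value of 1f is an assumption):
// read the simulated density from the run configuration, e.g. VM argument -Ddensity=2
String simulated = System.getProperty("density");
Game.density = (simulated != null) ? Float.parseFloat(simulated) : 1f;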

One approach is having a fixed viewport size. Create your camera as, for example, 1366x768 and place all your objects using those coordinates. The camera will then fill the whole screen on every other resolution.
cam = new OrthographicCamera(1366, 768);
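To actually render through that camera, you would point your SpriteBatch at it each frame; a minimal sketch (the batch field is assumed):
// render everything through the fixed-size camera
cam.update();
batch.setProjectionMatrix(cam.combined);
batch.begin();
// ... draw sprites using 1366x768 coordinates ...
batch.end();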

Try watching a few tutorials. I personally think it is best to deal with pixels, and using the camera will help you a lot. Check this link:
getting different screen resolutions

Related

How can I best implement fog of war with Java?

Hello I am an inexperienced programmer and this is my first question on Stack Overflow!
I am attempting to implement 'fog of war' in my Java game. This means most of my map begins black, and then, as one of my characters moves around, parts of the map are revealed. I have searched around (including here), found a few suggestions, and tried tweaking them myself. Each of my approaches works; however, I run into significant runtime issues with each. For comparison, before any of my fog of war attempts I was getting 250-300 FPS.
Here is my basic approach:
1. Render my background and all objects on my JPanel.
2. Create a black BufferedImage (fogofwarBI).
3. Work out which areas of my map need to be visible.
4. Set the relevant pixels on my fogofwarBI to be fully transparent.
5. Render my fogofwarBI, thus covering parts of the screen with black, and in transparent sections allowing the background and objects to be seen.
For initialising the buffered image I have done the following in my FogOfWar() class:
private BufferedImage blackBI = loader.loadImage("/map_black_2160x1620.png");
private BufferedImage fogofwarBI = new BufferedImage(blackBI.getWidth(), blackBI.getHeight(), BufferedImage.TYPE_INT_ARGB);
public FogOfWar() {
    fogofwarBI.getGraphics().drawImage(blackBI, 0, 0, null);
}
In each of my attempts I start the character in the middle of 'visible' terrain, i.e. in a section of my map which has no fog (where my fogofwarBI will have fully transparent pixels).
Attempt 1: setRGB
First I find the 'new' coordinates in my character's field of vision when it has moved, i.e. not every pixel within the character's range of sight, but just the pixels at the edge of his range of vision in the direction he is moving. This is done with a for loop, and goes through up to 400 or so pixels.
I feed each of these x and y coordinates into my FogOfWar class.
I check whether these x,y coordinates are already visible (in which case I don't bother doing anything with them, to save time). I do this check by maintaining a Set of Lists, where each List contains two elements, an x and a y value, and the Set is the unique set of coordinate Lists. The Set begins empty, and I add x,y coordinates to it to represent transparent pixels. I use the Set to keep the collection unique and because I understand its contains check is a fast way of doing this test, and I store each coordinate in a List to avoid mixing up x and y.
If a given x,y position on my fogofwarBI is not currently visible, I set its RGB to be transparent using .setRGB, and add it to my transparentPoints Set so that coordinate will not be edited again in future.
Set<List<Integer>> transparentPoints = new HashSet<List<Integer>>();

public void editFog(int x, int y) {
    if (!transparentPoints.contains(Arrays.asList(x, y))) {
        fogofwarBI.setRGB(x, y, 0); // 0 is fully transparent in ARGB
        transparentPoints.add(Arrays.asList(x, y));
    }
}
I then render it using
public void render(Graphics g, Camera camera) {
    g.drawImage(fogofwarBI, 0, 0, Game.v_WIDTH, Game.v_HEIGHT,
            camera.getX() - Game.v_WIDTH/2, camera.getY() - Game.v_HEIGHT/2,
            camera.getX() + Game.v_WIDTH/2, camera.getY() + Game.v_HEIGHT/2, null);
}
Here I am basically drawing the correct part of my fogofwarBI onto my JPanel (800*600) based on where my game camera is.
Results:
Works correctly.
FPS of 20-30 when moving through fog, otherwise normal (250-300).
This method is slow due to the .setRGB function being run up to 400 times each time my game 'ticks'.
Attempt 2: Raster
In this attempt I create a raster of my fogofwarBI so I can manipulate its pixels directly in array form.
private BufferedImage blackBI = loader.loadImage("/map_black_2160x1620.png");
private BufferedImage fogofwarBI = new BufferedImage(blackBI.getWidth(), blackBI.getHeight(), BufferedImage.TYPE_INT_ARGB);
WritableRaster raster = fogofwarBI.getRaster();
DataBufferInt dataBuffer = (DataBufferInt)raster.getDataBuffer();
int[] pixels = dataBuffer.getData();
public FogOfWar() {
    fogofwarBI.getGraphics().drawImage(blackBI, 0, 0, null);
}
My editFog method then looks like this:
public void editFog(int x, int y) {
    if (!transparentPoints.contains(Arrays.asList(x, y))) {
        pixels[x + y * Game.m_WIDTH] = 0; // 0 is fully transparent in ARGB
        transparentPoints.add(Arrays.asList(x, y));
    }
}
My understanding is that the raster's pixels array is the image's backing data, so writes to the array show up in the image directly, and I render the BI in the same way as in attempt 1.
Results:
Works correctly.
A constant FPS of around 15.
I believe it is constantly this slow (regardless of whether my character is moving through fog or not) because, whilst manipulating the pixels array itself is quick, grabbing the raster's data buffer apparently stops the image from being cached/accelerated, so the raster is constantly working.
Attempt 3: Smaller Raster
This is a variation on attempt 2.
I read somewhere that constantly rescaling a BufferedImage using the 10-argument version of .drawImage is slow. I also thought that having a raster for a 2160*1620 BufferedImage might be slow.
Therefore I tried making my 'fog layer' only the size of my view (800*600), and updating every pixel using a for loop, based on whether the current pixel should be black or visible according to my standard transparentPoints Set and my camera position.
So now my editFog method just updates the Set of invisible pixels, and my render method looks like this:
public void render(Graphics g, Camera camera) {
    int xOffset = camera.getX() - Game.v_WIDTH/2;
    int yOffset = camera.getY() - Game.v_HEIGHT/2;
    for (int i = 0; i < Game.v_WIDTH; i++) {
        for (int j = 0; j < Game.v_HEIGHT; j++) {
            if (transparentPoints.contains(Arrays.asList(i + xOffset, j + yOffset))) {
                pixels[i + j * Game.v_WIDTH] = 0;
            } else {
                pixels[i + j * Game.v_WIDTH] = myBlackARGB;
            }
        }
    }
    g.drawImage(fogofwarBI, 0, 0, null);
}
So I am no longer resizing my fogofwarBI on the fly, but I am updating every single pixel every time.
Result:
Works correctly.
FPS: Constantly 1 FPS - worst result yet!
I guess that any savings from not rescaling my fogofwarBI and having it smaller are massively outweighed by updating all 800*600 pixels in the raster rather than around 400.
I have run out of ideas, and none of my internet searching is getting me any further towards doing this in a better way. I think there must be a way to do fog of war efficiently, but perhaps I am not yet familiar enough with Java or the available tools.
Any pointers as to whether my current attempts could be improved, or whether I should be trying something else altogether, would be very much appreciated.
Thanks!
This is a good question. I am not familiar with AWT/Swing-style rendering, so I can only try to explain a possible solution to the problem.
From a performance standpoint, I think it is a better choice to chunk the FOW into bigger sections of the map rather than using a pixel-based system. That will reduce the number of checks per tick, and updating it will also take fewer resources, as only a small portion of the window/map needs to update. The larger the grid, the fewer the checks, but there is a visual penalty the bigger you go.
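For illustration, a minimal sketch of such a grid-based fog, assuming a hypothetical 32-pixel cell size (all names here are placeholders, not from the question):
// hypothetical grid-based fog: one flag per 32x32-pixel cell instead of per pixel
import java.awt.Color;
import java.awt.Graphics;

public class GridFog {
    private static final int CELL = 32;           // assumed cell size in pixels
    private final boolean[][] revealed;

    public GridFog(int mapWidth, int mapHeight) { // map size in pixels
        revealed = new boolean[mapWidth / CELL][mapHeight / CELL];
    }

    // reveal the cell containing a pixel coordinate
    public void reveal(int px, int py) {
        revealed[px / CELL][py / CELL] = true;
    }

    // draw only the cells that are still hidden
    public void render(Graphics g) {
        g.setColor(Color.BLACK);
        for (int i = 0; i < revealed.length; i++)
            for (int j = 0; j < revealed[i].length; j++)
                if (!revealed[i][j])
                    g.fillRect(i * CELL, j * CELL, CELL, CELL);
    }
}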
Leaving it like that would make the FOW look blocky/pixelated, but it's not something you can't fix.
For the direct surroundings of the player, you can add a circle texture with the player at its center. You can then use blending (I believe the term in AWT/Swing is a composite) to 'override' the alpha where the circle overlaps the FOW texture. This way the pixel-based updating is done by the rendering API, which usually uses hardware-accelerated methods to achieve these things. (For custom pixel-based rendering, something like 'shader scripts' is often used, if supported by the rendering API.)
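A minimal sketch of that composite idea in Java2D, assuming visionCircle is a texture whose alpha fades out towards the edge (the variable names are placeholders):
// DST_OUT keeps the fog only where the circle's alpha is zero,
// punching a soft-edged transparent hole into the fog image
Graphics2D g2 = fogofwarBI.createGraphics();
g2.setComposite(AlphaComposite.getInstance(AlphaComposite.DST_OUT));
g2.drawImage(visionCircle, playerX - radius, playerY - radius, null);
g2.dispose();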
This is enough if you only need temporary vision in the FOW (if you don't need to 'remember' the map); you wouldn't even need a texture grid for the FOW then. But I suspect you do want to 'remember' the map, so in that case:
The blocky/pixelated look can be fixed like they do with grid-based terrain: basically, add small additional textures/shapes based on the surroundings to make things look nice. The link below provides good examples and a detailed explanation of how to do these 'terrain transitions', as they are called.
https://www.gamedev.net/articles/programming/general-and-gameplay-programming/tilemap-based-game-techniques-handling-terrai-r934/
I hope this gives a better result. If you cannot get a better result, I would advise switching over to something like OpenGL for the render engine, as it is meant for games, while the AWT/Swing API is primarily intended for UI/application rendering.

Face Detector Mobile Vision speed not increased with smaller Bitmap

Summary:
Our app depends on a high detection speed for facial landmarks (e.g. whether the eyes are open or closed). So I developed an algorithm that takes the position of the face from the last frame and crops the image of the next frame accordingly. This works perfectly, and the Face Detector only has to process a quarter of the image.
But it does not increase the detection speed. Does anybody know why?
Edit: all my algorithm does is crop the image based on the information from the last frame. It does not perform the image recognition itself. We are using Mobile Vision from Google.
Important code snippets:
This snippet is executed before passing the bitmap to the Face Detector. It takes the face position from the previous frame and only passes that part of the image:
Bitmap bitmapReturn = Bitmap.createBitmap(bitmap, topLeftX, topLeftY, width, height);
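For context, a minimal sketch of how such a cropped bitmap would then be handed to a Mobile Vision detector (the detector setup is assumed, not from the question):
// feed only the cropped region to the detector (detector setup assumed)
Frame frame = new Frame.Builder().setBitmap(bitmapReturn).build();
SparseArray<Face> faces = detector.detect(frame);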
This snippet is executed after the frame is processed by the Face Detector. It provides the location of the image for the next frame:
float spotY = getSpotY(face);
float spotX = getRatioX(face);
int moveX = (int) (((float) bitMapScanWidth / 2) - spotX);
int moveY = (int) (((float) bitMapScanHeight / 2) - spotY);
moveValues(moveX, moveY);
There are some further code snippets that make sure the values topLeftX and topLeftY don't reach beyond the bitmap bounds, and others that make sure the face keeps the same size in the image.
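Those bounds checks are not shown in the question; a minimal sketch of what such clamping might look like (the variable names follow the question, the logic is an assumption):
// clamp the crop origin so the crop rectangle stays inside the bitmap (sketch)
topLeftX = Math.max(0, Math.min(topLeftX, bitmap.getWidth() - width));
topLeftY = Math.max(0, Math.min(topLeftY, bitmap.getHeight() - height));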
But as said before, the algorithm works fine yet doesn't lead to any more speed. I can't figure out why, because it should massively reduce the required computation time. Can anybody explain to me why this is not the case? Do I have to adjust something? Or is there another way to increase the speed of my algorithm?
Note that when I compared the speed of the two versions (with the algorithm that crops the image and without it), both versions actually ran the functions required to crop the image. The only difference was that one of them actually used the computed values to crop the image, while the other one just calculated them in the background. This means that the computation required by my algorithm was not the reason for the missing speed improvement.
If you are building your own algorithm for facial recognition, you can try changing the actual algorithm to an architecture that is suitable for mobile devices, like MobileNet-SSD. You can also change how you compile your model and deploy it on mobile, because both of those techniques can boost performance beyond what a simple cropping function can do.
Further, if you don't have any problem sharing the actual algorithm you are using, I will do my best to see why cropping doesn't help in your specific case.

sizing images with OrthographicCamera

I am new to libGDX, and this question might be obvious since they skip over it in every tutorial.
But say I set a camera up like this:
cam = new OrthographicCamera(100, 100);
This means I will now be working with my own units instead of pixels. So how do I know what size to make an image? Say, for example, I want an image to fill the width of the camera and half of its height. How would I do this? Do I make the image 100x50 px? That makes no sense to me.
You say you are working with your own units when you define your camera, yet you are still thinking in pixels when you ask whether you should make your image 100x50 px.
Since you are working with your own units, I would assume that they are not completely detached from the original pixel units, meaning that everything should now be measured in your units, including the size of the images.
If you can calculate what 1 of your units represents pixel-wise, you can then determine the scale by which to scale all of your images.
Then you can say that your image should be 100x50 units in size; you don't need to create the image at that exact pixel size, you just need to adjust its scale so that it corresponds with your defined unit measurement.
If you are using SpriteBatch to draw your images, you might find that a couple of the draw overloads documented in the API take a size or scale for both X and Y, and they could prove useful in this scenario.
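For example, a minimal sketch, assuming img is a loaded Texture and cam is the 100x100 camera from the question (centered on the origin by default):
// draw the image to cover the full 100-unit width and half the height,
// regardless of the texture's size in pixels
batch.setProjectionMatrix(cam.combined);
batch.begin();
batch.draw(img, -50, -25, 100, 50); // x, y, width, height in world units
batch.end();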

Game Dev - Working with Screen Coordinates vs World Coordinates

A friend and myself are new to game development, and we had a discussion regarding World Coordinates and Screen Coordinates.
We are following a wonderful online tutorial series for libGDX, and they are using a 100 PPM (pixels per meter) scaling factor. If you resize the screen, the scaling of objects no longer works. My friend is convinced that this is not a problem, and he may be right. But I'm under the impression that when developing a game, the developers should typically work only with the pre-defined world coordinate system and let the camera transform it to the chosen screen coordinates. I do understand the need for reverse transformations when using mouse clicks, etc. But the placing and scaling of objects in world space is my concern.
I would like to reach out to this community for some professional feedback.
That's one of the biggest problems with almost all libGDX tutorials. They are great, but the pixel-to-meter/unit conversion is just wrong.
libGDX offers a great solution for that with Camera, and an even better one with the newer Viewport classes (which under the hood work with Camera).
It is really simple and will solve the problem of different screen sizes/aspect ratios.
Just choose a Virtual_Width and Virtual_Height (think about them in meters or similar units).
For example, say you have humans fighting each other in a 2D platformer game. Let's say our humans are 2m tall, so think about how much screen space one human should use. If we say a human should take up 1/10 of the screen height, our virtual height is 10 * 2 = 20. Now think about the primary aspect ratio you are targeting. Let's say it is 16:9, so you have a virtual width of about 35.
Next, you need to think about what kind of Viewport you want. You surely want one which supports a virtual width and virtual height.
You may want a Viewport which keeps the aspect ratio and fills the rest of the screen (if the screen has a different aspect ratio) with black bars (FitViewport), or you may want the Viewport to fill the whole screen by stretching the units (StretchViewport).
Now just create the Viewport with your virtual width and height, and update it in the resize() method with the given width and height, as in the sketch below.
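A minimal sketch of that setup, using the 35x20 example numbers from above:
// create a camera and a FitViewport with the virtual size in world units
OrthographicCamera camera = new OrthographicCamera();
FitViewport viewport = new FitViewport(35, 20, camera); // 35x20 world units

public void resize(int width, int height) {
    // let the viewport recompute the scaling for the new screen size
    viewport.update(width, height, true); // 'true' also centers the camera
}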
Hope it helps.
PPM would be better named 'units per meter'.
And when you resize your screen, you just set a new projection matrix, so everything works fine.
What you should worry about is the aspect ratio.
Everything else doesn't matter.
So, answering your question: stay with world coordinates.
It also makes it simple to add physics, light calculations, and sensible dimensions (1.8 units instead of 243 pixels).

Java3D. How to increase range of sight?

If I create a simple application where I can fly over a plain, I can only see a little of the plain. The engine only renders within a certain radius around the camera; everything beyond that appears in the background colour. So it feels like being in a fog where my range of sight is only a couple of meters.
How do I increase that range of sight?
javax.media.j3d.View.setFrontClipDistance(double distance)
More data found here:
http://download.java.net/media/java3d/javadoc/1.3.2/javax/media/j3d/View.html
Sorry if this seems a bit late, but I want to clarify for future reference that the best answer is not exactly correct.
setFrontClipDistance is the distance at which something un-renders as you get close to it; by default this value is 0.01 (meters), as you do not want something to un-render when you are 10 meters from it, well, at least in most cases.
What is truly being asked is how to increase the maximum render distance, and that is done with setBackClipDistance, which defaults to 10 (meters). If you set it to 1000, that would increase the maximum render distance to 1000 scale meters.
The proper way to set this, assuming you are using a SimpleUniverse object, is to access the function through the View of the instantiated object.
// create a SimpleUniverse object using a 3D canvas object you have
SimpleUniverse simpleU = new SimpleUniverse(your3dCanvasHere);
// add in your compiled branch group
simpleU.addBranchGraph(yourBranchGroupHere);
// increase the render distance with setBackClipDistance
simpleU.getViewer().getView().setBackClipDistance(1000);
If you are planning to develop something serious, you shouldn't stick with Java3D; try using OpenGL. OpenGL comes with the function:
gluPerspective(fieldOfViewY, aspect, near, far);
The far parameter is what you are looking for. OpenGL is also far more efficient than a CPU-based drawing engine, because it uses the GPU.
