I am developing a 2D platformer game for the Android platform, so I don't really care about the screen DPI, but much more about the actual resolution in pixels. From what I've gathered on the net, there are a couple of different resolutions (and aspect ratios) out there. According to my search, the two resolutions that are currently widespread are 480x320 (ratio 1.5) and 800x480 (ratio 1.67), is that right? I'd like to target these two resolutions to reach most customers.
Now, I can deal with the different aspect ratios by showing black borders of 40 pixels on each side of the bigger display, essentially reducing it to 720x480 pixels and a ratio of 1.5.
The problem with my game is that it is essential for gameplay that the players see the same amount of the world on each screen. Otherwise, some players would get an unfair advantage. Furthermore, I trigger some events depending on the visibility. For example, an enemy is only allowed to start shooting when the player starts seeing it. Otherwise, the enemies' bullets would seem to come from nowhere.
So I figured I need to either create my graphics for one resolution and scale them for the other, or create separate graphics for each resolution. Is that right? Unfortunately, both approaches are suboptimal for pixel graphics.
On another note: how can I restrict my game to these resolutions only (especially on the Android Market)? I know about the "supports-screens" tag in the manifest, but that works based on the effective screen size, not the size in pixels, or am I mistaken?
I am also interested in personal experiences of other android game developers when it comes to resolution independence.
Thanks!
My question would be: what do you think you would do on a PC? For game development, Android should be looked at much more like a PC target than a console. You just intrinsically need to accept that there will be some diversity of screens that you can't totally predict up front.
So I think there are two main approaches to take:
(1) Use a constant "display size" as if you were setting a fixed video resolution on the PC and letting the user's monitor deal with it. On these devices of course there is no monitor, just one fixed display, so it doesn't make sense to modify the core resolution. Instead, you can set up the SurfaceView showing your game to have a fixed resolution, and let the platform's compositor take care of scaling it (in hardware) as it composites to the screen.
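For what it's worth, here is a minimal sketch of option (1) on Android. SurfaceHolder.setFixedSize() is the real platform call that triggers hardware scaling at composition time; the Activity scaffolding and the 800x480 target are assumptions for illustration, not anything prescribed above:

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.SurfaceView;

public class GameActivity extends Activity {
    // Fixed logical resolution; these values are illustrative.
    private static final int GAME_WIDTH = 800;
    private static final int GAME_HEIGHT = 480;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        SurfaceView gameView = new SurfaceView(this);
        // Render everything at 800x480; the platform compositor scales
        // the buffer to the physical screen in hardware.
        gameView.getHolder().setFixedSize(GAME_WIDTH, GAME_HEIGHT);
        setContentView(gameView);
    }
}
```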
(2) More intelligently adjust to the actual resolution of the screen you find yourself running in. Scale up or down graphics yourself to create the playing area you want. Maybe have some different sizes of textures and select the appropriate ones for the screen resolution.
You could probably also do a combination of these, where you have a couple of fixed sizes that the game runs well at and pick one for the surface view depending on the total resolution available. In either case, you can letterbox as appropriate to keep your aspect ratio constant on different screens, if that is what you want.
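If you do letterbox, the viewport math is simple. A hedged sketch follows; the Letterbox class is illustrative and the 1.5 target ratio is taken from the question's 480x320 case:

```java
// Fit a fixed-aspect game area into an arbitrary screen and return the
// viewport rectangle; everything outside it is drawn as black bars.
final class Letterbox {
    static final float TARGET_ASPECT = 480f / 320f; // 1.5, from the question

    static int[] viewport(int screenW, int screenH) {
        float screenAspect = (float) screenW / screenH;
        int w, h;
        if (screenAspect > TARGET_ASPECT) {
            h = screenH;                       // full height, bars on the sides
            w = Math.round(h * TARGET_ASPECT);
        } else {
            w = screenW;                       // full width, bars top and bottom
            h = Math.round(w / TARGET_ASPECT);
        }
        int x = (screenW - w) / 2;             // center the playing area
        int y = (screenH - h) / 2;
        return new int[] { x, y, w, h };
    }
}
```

For the question's 800x480 case this returns { 40, 0, 720, 480 }: exactly the 40-pixel borders described above.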
There are three approaches to differences in aspect ratio:
(1) Show opaque borders on some ratios ("letterboxing").
(2) Show more of the game world on some ratios.
(3) Don't work at all on some ratios.
With approach (1) you waste screen space on some devices. Not such a big deal for televisions, but miserable on handheld devices where screen space is limited. With approach (2) players on some devices get advantages (they can see more of the world) and disadvantages (sprites are smaller, so touch precision is harder). Approach (3) just sucks.
Obviously it depends on the details of your game which is better, but as a player I much prefer approach (2). The constituency who care if players on other devices get a bit of a hypothetical advantage is pretty small compared to the constituency who care if their screen is partly obscured by unnecessary black bars.
(Similar approaches and remarks apply to differences in resolution.)
I am using Java with the OpenCV library to detect the face, eyes, and mouth using a laptop camera.
What I have done so far (sketched in code after this list):
Capture video frames using a VideoCapture object.
Detect the face using Haar cascades.
Divide the face region into a top region and a bottom region.
Search for the eyes inside the top region.
Search for the mouth inside the bottom region.
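In outline, the pipeline looks roughly like this (a sketch assuming the OpenCV 3.x Java package names and valid cascade XML paths, not my exact code):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;

public class FaceRegions {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        VideoCapture cap = new VideoCapture(0); // laptop camera
        CascadeClassifier faceCascade =
                new CascadeClassifier("haarcascade_frontalface_default.xml");
        CascadeClassifier eyeCascade =
                new CascadeClassifier("haarcascade_eye.xml");

        Mat frame = new Mat();
        Mat gray = new Mat();
        while (cap.read(frame)) {
            // Cascades work on grayscale; equalizing helps with lighting.
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
            Imgproc.equalizeHist(gray, gray);

            MatOfRect faces = new MatOfRect();
            faceCascade.detectMultiScale(gray, faces);
            for (Rect f : faces.toArray()) {
                // Top half of the face for the eyes, bottom half for the mouth.
                Mat top = gray.submat(new Rect(f.x, f.y, f.width, f.height / 2));
                MatOfRect eyes = new MatOfRect();
                eyeCascade.detectMultiScale(top, eyes);
                // ... the mouth search in the bottom half works the same way.
            }
        }
        cap.release();
    }
}
```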
Problem I am facing:
At first the video runs normally, but then it suddenly becomes slower.
Main Questions:
Do higher camera resolutions work better for Haar cascades?
Do I have to capture video frames at a certain scale, for example 100px x 100px?
Do Haar cascades work better on grayscale images?
Do different lighting conditions make a difference?
What exactly does the method detectMultiScale(params) do?
If I want to go further and analyze eye blinking, eye closure duration, mouth yawning, head nodding, and head orientation to detect fatigue (drowsiness) using a Support Vector Machine, any advice?
Your help is appreciated!
The following article gives an overview of what is going on under the hood; I would highly recommend reading it.
Do higher camera resolutions work better for Haar cascades?
Not necessarily. cascade.detectMultiScale has parameters to adjust for various input widths and heights, such as minSize and maxSize. These are optional, but you can tweak them to get robust predictions if you have control over the input image size. If you set minSize to a small value and ignore maxSize, it will work for small and high-resolution images alike, but performance will suffer. And if you are now wondering how there can be no difference between high-res and low-res images, consider that cascade.detectMultiScale internally scales the image across resolutions for a performance boost; that is why defining maxSize and minSize is important, to avoid unnecessary iterations.
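As a concrete illustration of those parameters, here is a hedged sketch using the full Java overload of detectMultiScale; every number is a placeholder to tune against your own input size, not a recommendation:

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Size;
import org.opencv.objdetect.CascadeClassifier;

final class BoundedDetect {
    // Constrain the search so the cascade skips pyramid levels it cannot use.
    static MatOfRect faces(CascadeClassifier cascade, Mat gray) {
        MatOfRect found = new MatOfRect();
        cascade.detectMultiScale(
                gray, found,
                1.1,                 // scaleFactor: step between pyramid levels
                3,                   // minNeighbors: higher = fewer false positives
                0,                   // flags: unused with newer cascades
                new Size(60, 60),    // minSize: skip detections smaller than this
                new Size(400, 400)); // maxSize: skip detections larger than this
        return found;
    }
}
```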
Do I have to capture video frames at a certain scale, for example 100px x 100px?
This mainly depends on the parameters you pass to cascade.detectMultiScale. Personally, I would guess that 100 x 100 is too small for detecting smaller faces in the frame, since some features are completely lost when resizing the frame to smaller dimensions, and cascade.detectMultiScale depends heavily on the gradients (features) in the input image.
But if the face is the major part of the input frame and there are no smaller faces lurking in the background, then you may use 100 x 100. I have tested some sample faces of size 100 x 100 and it worked pretty well. If that is not the case, a width of 300-400 px should work well. Either way, you will need to tune the parameters to achieve accuracy.
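If you do downscale frames before detection, the usual OpenCV Java call looks like this; the helper name and the target width are assumptions for illustration:

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

final class Downscale {
    // Shrink a frame to a target width before detection, keeping the
    // aspect ratio. A 320px target is an assumption, not a recommendation.
    static Mat toWidth(Mat frame, int targetWidth) {
        double scale = (double) targetWidth / frame.cols();
        Mat small = new Mat();
        // INTER_AREA is the usual interpolation choice when shrinking.
        Imgproc.resize(frame, small, new Size(), scale, scale, Imgproc.INTER_AREA);
        return small;
    }
}
```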
Do Haar cascades work better on grayscale images?
They only work on grayscale images.
If you read the first part of the article, you will see that face detection comes down to detecting many binary patterns in the image. This basically comes from the Viola-Jones paper, which is the basis of this algorithm.
Do different lighting conditions make a difference?
Maybe in some cases, but Haar features are largely lighting-invariant.
If you mean taking images under, say, green or red light, that should not affect detection: the Haar features operate on grayscale input, so they are independent of the RGB color of the image. The detection mainly depends on the gradients/features in the input image, so as long as there are enough gradient differences, such as the eyebrow having lower intensity than the forehead, it will work fine.
But consider a case where the input image is backlit or has very low ambient light. Then some prominent features may not be found, which can result in the face not being detected.
What exactly does the method detectMultiScale(params) do?
If you have read the article by this time, you should already know it well.
If I want to go further and analyze eye blinking, eye closure duration, mouth yawning, head nodding, and head orientation to detect fatigue (drowsiness) using a Support Vector Machine, any advice?
No, I would not suggest doing this kind of gesture detection with an SVM, as it would be extremely slow to run 10 different cascades to determine the current facial state. Instead, I recommend a facial landmark detection framework such as Dlib. You may search for other frameworks as well, because Dlib's model is nearly 100MB and may not suit your needs if you want to port to a mobile device. So the key is facial landmark detection: once you have the full face labelled, you can draw conclusions such as whether the mouth is open or the eyes are blinking, and it works in real time, so your video processing won't suffer much.
I'm creating a UI system for an Android game that will have a large (up to 4096x4096) background area in which menus can be placed anywhere, and a camera will fly to that location when a different menu is needed. Instead of having a large static image, I'd like to be able to animate this slightly. What I'd like to know is how to do this efficiently without lagging up the device. These are the methods I've come up with so far, but maybe there is something better...
1) Have 3 separate 4096x4096 static layers for the background, 1 is the sky, one is the terrain, one is things like clouds and trees. Each layer is placed on top of each other with a slight difference in Z space to give a little parallax effect when the camera moves.
2) Have a large stationary background image, with a layer on top of that with individual specific sprites of clouds, trees and other things that should be animated. I think this might be the most efficient route, as I can choose not to animate parts that are not in view, but it will also limit re-usability as every different object will have to be placed manually in space. My goal is to be able to simply change the assets and be able to have a whole new game.
3) Have 1 large background layer with several frames that plays almost like a video. I feel like this will be the worst for performance (loading several 4096x4096 frames and drawing a different one 30 times a second), but it would give me the scene exactly how I want it, straight out of After Effects. I doubt this one is even feasible, not just because of the drawing: storage space on Android devices just for the menu UI wouldn't allow for several 6MB frames.
Are any of these in the right direction? I have seen a few similar questions asked, but none fit close enough to what I needed (a large, moving background that isn't made of tiles).
Any help is appreciated.
Since your question is tagged for Android, I would recommend the 2nd solution.
The main reason is that solutions #1 and #3 involve loading numerous 4096x4096 textures.
Quick calculation: three 32-bit textures at that resolution would use about 192MB of video RAM (64MB each). That means you can immediately rule out a lot of Android devices.
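The arithmetic behind that number, for anyone who wants to check it:

```java
public class TextureBudget {
    public static void main(String[] args) {
        // An uncompressed 32-bit RGBA texture costs width * height * 4 bytes.
        long perLayer = 4096L * 4096L * 4;                 // 64 MB per layer
        long total = 3 * perLayer;                         // three full layers
        System.out.println(total / (1024 * 1024) + " MB"); // prints "192 MB"
    }
}
```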
On the other hand, solution #2 involves only two big textures: a large stationary background image, and a texture atlas containing the specific sprites of clouds, trees...
This solution is much more memory-friendly, and will lead to the same aesthetic output.
TL;DR: all three solutions would work, but only #2 fits an embedded device.
I am building a 2D top-down tile-based game in Java. Naturally you can pan around and zoom in on the game; there are currently 10 different zoom levels, with each tile ranging from 10x10 pixels to 100x100 pixels accordingly. Currently, the tiles for each zoom level are stored in separate sprite sheets, read in at startup and stored in a BufferedImage array. I am sure this can't be the best way to go about this.
I am looking for any tips to enhance efficiency for the long term. Would it be better to have only the 100x100 tiles and scale them dynamically in Java, or somehow use vector graphics in Java (I'm not sure how, but I'm sure Google could help me), or what?
Many thanks!
I'd go dynamic.
Normally in computer graphics you use matrices that, applied to the graphics context, modify everything you draw on it.
This is used to modify position, scale, rotation, etc. Rather than subtracting the camera position from every tile, you apply the translation once to the graphics context, and then you draw your tiles in world position. The graphics context will take care of placing the tiles in the correct screen space.
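A minimal Java2D sketch of that idea; the class and method names are illustrative:

```java
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

final class CameraRenderer {
    // Apply camera translation and zoom once, then draw in world coordinates.
    static void render(Graphics2D g, BufferedImage tile,
                       double camX, double camY, double zoom) {
        AffineTransform saved = g.getTransform(); // remember original transform
        g.scale(zoom, zoom);                      // e.g. 0.1 (far out) .. 1.0
        g.translate(-camX, -camY);                // move the world under the camera
        // Drawn at world position (100, 100); the context maps it to screen.
        g.drawImage(tile, 100, 100, null);
        g.setTransform(saved);                    // restore for UI drawing etc.
    }
}
```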
I suggest the following reads:
http://docs.oracle.com/javase/tutorial/2d/advanced/transforming.html
http://www.javalobby.org/java/forums/t19387.html
If you're doing fixed zooming (i.e. each zoom level is a fixed distance from the camera), as opposed to fluid zooming (the player can zoom in by 3.3x, 7.5x, and not just 1x, 2x, 3x, etc.), then it's massively wasteful to try to solve this by simply applying a zoom transform. It's tempting because it's the least complicated approach and easy to understand from an implementation standpoint, but at maximum zoom-out you're rendering an area that's 10x larger in the X direction and 10x larger in the Y direction, so the area of the world you have to render is 100x larger than at maximum zoom-in. I also doubt you'll like the way your textures get squished by the hardware as you zoom out. Computer graphics isn't the same as optics: subpixel rendering and other things that happen in computer graphics aren't going to make your textures look very good if you hand that task off to the software/hardware.
Even if you do fluid zooming, I would still use level-of-detail textures and dynamically swap them out depending on the distance between the world being rendered and the camera.
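A hedged sketch of that level-of-detail swap in Java; the sheet sizes and thresholds are my assumptions, based on the question's 10x10 to 100x100 tile range:

```java
import java.awt.image.BufferedImage;

final class TileLod {
    // Sheets at decreasing native sizes; indices/sizes here are assumptions:
    // 0 = 100x100 tiles, 1 = 50x50, 2 = 25x25.
    private final BufferedImage[] sheets;

    TileLod(BufferedImage[] sheets) {
        this.sheets = sheets;
    }

    // zoom = 1.0 means a tile covers 100 px on screen (the question's maximum).
    BufferedImage sheetFor(double zoom) {
        int onScreen = (int) Math.round(100 * zoom); // pixels a tile occupies
        if (onScreen >= 75) return sheets[0];        // close: full-res art
        if (onScreen >= 38) return sheets[1];        // mid: half-res is nearest
        return sheets[2];                            // far: quarter-res
    }
}
```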
Also, 10 zoom levels? Are you sure you really need 10 zoom levels? Zoom is usually used in 2D games to let you perform different activities at different levels of detail, because a particular zoom level is especially well suited to a certain set of activities. I don't remember any 2D game that needed 10 zoom levels to accomplish this; 3-5 is the most I've ever seen, and I've never felt that it wasn't enough. It also seems like a lot of artwork to produce the images at every zoom level for 10 zoom levels.
You're also likely going to find that applying an AffineTransform sounds like a good idea, but that it's extremely computationally expensive, and if you need 60fps performance, you're highly unlikely to achieve it this way. Don't take my word for it though, go try it and see how badly it falls over on itself.
I am creating a Java game for Windows and I have come across a problem: there are lots of different screens and resolutions when it comes to Windows. What would be the best way to make it so that it looks just about the same on all screens?
You basically have three options:
Fix the size of the game window to something small that will fit on all screens (800x600 maybe). This is easy to do, but could annoy users with big screens...
Make the game resolution-independent, so that it is rendered at a scale to fit the current window size; this is how most FPS games work, for example (see the sketch after this list). The main downside is that you need to do some extra scaling maths in your code and there may be some runtime overhead for rescaling images etc.
Make the game screen dynamically resizable, so that the components within it rearrange and resize themselves to fit the available space (like with a web page). This is the hardest to implement as you have to make use of appropriate layout managers and test lots of different combinations, but can give the nicest user "experience". I've successfully used MigLayout to do this in the past with a Swing game.
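As promised above, a sketch of option 2 in Swing/Java2D: render to a fixed-size backbuffer, then stretch it to whatever size the window currently has. The 800x600 logical resolution is an assumption:

```java
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

class GamePanel extends JPanel {
    // The game always draws at this fixed logical resolution.
    private final BufferedImage backbuffer =
            new BufferedImage(800, 600, BufferedImage.TYPE_INT_ARGB);

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D bg = backbuffer.createGraphics();
        // ... draw the game at the fixed 800x600 logical resolution ...
        bg.dispose();
        // Stretch the backbuffer to the panel's current size.
        g.drawImage(backbuffer, 0, 0, getWidth(), getHeight(), null);
    }
}
```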
Any of these options could be best for you depending on the circumstances. It will probably depend mainly on the type/design of your game and your willingness to spend time on making the more complicated methods work well.
I'm getting into app/game development for Android and I just wanted to know how hard it is to make your games work with all phones. Or do the phones just scale the app to fit their screens? Thanks for any help.
Screen resolution isn't as big a problem as the differences in screen ratios and defining things like touch-area sizes.
The most common devices that my games and apps run on have the following sizes...
320x480 (4x6)
480x800 (3x5)
480x854 (it defies belief to try to give a ratio to that nonsense)
I use AndEngine and libgdx - both will scale automatically BUT I have to choose a ratio to work with, and it will crop (with black bars rather than lost content) on devices which don't share that ratio (for reference, I choose to crop on lower resolutions, as I think people with nicer screens would complain sooner!!)
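For what it's worth, newer libgdx releases wrap this "pick a ratio, letterbox the rest" behaviour in the Viewport API, which may postdate what this answer originally used; a hedged sketch, where the 800x480 world size is an assumption:

```java
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;

public class RatioGame extends ApplicationAdapter {
    private OrthographicCamera camera;
    private FitViewport viewport;

    @Override
    public void create() {
        camera = new OrthographicCamera();
        // Work in a fixed 800x480 world; FitViewport letterboxes other ratios.
        viewport = new FitViewport(800, 480, camera);
    }

    @Override
    public void resize(int width, int height) {
        viewport.update(width, height, true); // recompute black bars on resize
    }
}
```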
Actual physical screen sizes vary too - and you have to bear in mind that a box which may seem big enough to hit on a 4.3" high-density phone could be near-impossible to hit on a smaller/lower-density device...
Both of those things are far more worrying than scaling...
You should remember that screen resolution is only one of many factors deciding about game compatibility.
I think this video is a good start:
http://www.google.com/events/io/2011/sessions/building-aggressively-compatible-android-games.html