As far as I know, you can get the screen size and density using Gdx.graphics.getDensity(), so you can load the right texture for, e.g., 1x, 1.5x, etc.
But what about the texture that comes with a 3D model? For example, the texture is only intended for a maximum of 1280x800px, while my Android device has a 3x density.
I don't want to scale it up too much because that can make the image blurry and lose sharpness. Does anyone know a solution?
EDIT:
Let me explain in detail.
I have one ModelInstance with a 2048x2048px texture atlas attached.
When the game is opened on a 4K screen, I scale the model up almost three times, causing the texture to become blurry. That makes sense, because the jump from 240dpi to 640dpi is a big one.
So in my opinion the solution is to make several texture atlases, for 240dpi, 320dpi, 480dpi, etc. The problem is that I don't know how to replace a texture atlas that has been integrated with the Model from the start, so that when scaling up, the atlas is automatically swapped for a higher-resolution one. Thanks.
Usually in 3D graphics the camera or the model is mobile. There isn't a single fixed best resolution for a texture, because the camera may be very near, very far away, or viewing the textured surface at a glancing angle.
The solution graphics APIs offer is texture filtering settings. Under magnification (where a texel takes up more than a screen pixel) you can use linear filtering for soft edges or point filtering for hard edges. Minification is more complex: you can have linear or point filtering, but you can also have mipmaps, which are a precalculated chain of successively half-sized versions of your image, typically all the way down to 1x1. You can set texture filtering to pick the nearest mipmap, blend between mipmaps, or use anisotropic filtering for better sharpness at glancing angles. Generally, linear filtering for magnification, plus a full mipmap chain with anisotropic filtering for minification, produces very good quality with good enough performance to be a sensible default choice.
So, you won't be giving the GPU a single texture for your model, you'll be giving it a chain of textures, and letting the GPU worry about how to sample that chain to give the correct amount of blur/sharpness. For performance and compatibility with mipmaps, it is usually a good idea to use power-of-two textures (e.g. 1024x1024 rather than 1280x800).
So, just make a 1024x1024 or 2048x2048 texture with mipmaps and appropriate filtering settings, then use it on every device regardless of its resolution, and quality-wise you're sorted.
If you're particularly worried about memory use or load times, there's an argument for reducing the texture size on lower-resolution devices (basically, ship a second asset at halved resolution, or just skip the highest-resolution mip when loading on low-res devices), but I think that might be a premature optimization at this stage.
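In libGDX terms (the framework from the question), mipmap generation is a constructor flag plus a filter setting. A minimal sketch, with the atlas file name as a placeholder:

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.Texture.TextureFilter;

    public class MipmapDemo extends ApplicationAdapter {
        private Texture texture;

        @Override
        public void create() {
            // "model_atlas.png" is a placeholder for your model's atlas.
            // The 'true' flag asks libGDX to generate the full mipmap chain.
            texture = new Texture(Gdx.files.internal("model_atlas.png"), true);

            // Trilinear minification (blend between mipmap levels) plus
            // linear magnification: the default recommended above.
            texture.setFilter(TextureFilter.MipMapLinearLinear, TextureFilter.Linear);
        }

        @Override
        public void dispose() {
            texture.dispose();
        }
    }

If the texture is instead loaded for you as part of the model, the same filters should be settable on whatever textures the loader produces; the idea is identical.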
I am using Java with the OpenCV library to detect the face, eyes, and mouth using a laptop camera.
What I have done so far:
Capture video frames using a VideoCapture object.
Detect the face using Haar cascades.
Divide the face region into a top region and a bottom region.
Search for the eyes inside the top region.
Search for the mouth inside the bottom region.
Problem I am facing:
At first the video runs normally, but then it suddenly becomes slower.
Main Questions:
Do higher camera resolutions work better for Haar cascades?
Do I have to capture video frames at a certain scale? For example, 100px x 100px?
Do Haar cascades work better on grayscale images?
Do different lighting conditions make a difference?
What exactly does the method detectMultiScale(params) do?
If I want to go on to further analysis of eye blinking, eye closure duration, mouth yawning, head nodding, and head orientation to detect fatigue (drowsiness) using a support vector machine, any advice?
Your help is appreciated!
The following article would give you an overview of what is going on under the hood; I would highly recommend reading it.
Do higher camera resolutions work better for Haar cascades?
Not necessarily. cascade.detectMultiScale has parameters to adjust for various input width/height scenarios, such as minSize and maxSize. These are optional parameters, but you can tweak them to get robust predictions if you have control over the input image size. If you set minSize to a small value and ignore maxSize, it will work for small and high-resolution images alike, but performance will suffer. If you are wondering how there can be no difference between high-res and low-res images, consider that cascade.detectMultiScale internally scales the image down to lower resolutions for a performance boost; that is why defining maxSize and minSize is important to avoid unnecessary iterations.
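As a concrete illustration, here is a minimal sketch using OpenCV's Java bindings. The cascade path, image path, and the specific minSize/maxSize values are placeholders to tune for your own frames:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Rect;
    import org.opencv.core.Size;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.objdetect.CascadeClassifier;

    public class FaceDetectDemo {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            CascadeClassifier faceCascade =
                    new CascadeClassifier("haarcascade_frontalface_default.xml");
            Mat frame = Imgcodecs.imread("frame.jpg");

            // Cascades run on grayscale; equalizing the histogram also
            // helps under uneven lighting (see the lighting answer below).
            Mat gray = new Mat();
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
            Imgproc.equalizeHist(gray, gray);

            MatOfRect faces = new MatOfRect();
            // scaleFactor = 1.1, minNeighbors = 3, flags = 0. Bounding the
            // search window with minSize/maxSize skips scales that cannot
            // contain a face, which avoids unnecessary iterations.
            faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0,
                    new Size(60, 60),     // smallest face worth finding
                    new Size(400, 400));  // largest face worth finding

            for (Rect face : faces.toArray()) {
                System.out.println("Face at " + face.x + "," + face.y
                        + " size " + face.width + "x" + face.height);
            }
        }
    }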
Do I have to capture video frames at a certain scale? For example, 100px x 100px?
This mainly depends on the parameters you pass to cascade.detectMultiScale. Personally, I suspect 100 x 100 would be too small for detecting small faces in the frame, as some features would be completely lost when resizing the frame to smaller dimensions, and cascade.detectMultiScale is highly dependent on the gradients or features in the input image.
But if the face is the major part of the input frame and there are no other, smaller faces in the background, then you may use 100 x 100. I have tested some sample faces of size 100 x 100 and it worked pretty well. If that is not the case, then a width of 300-400 px should work well. However, you will need to tune the parameters to achieve accuracy.
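If you do decide to shrink frames before detection, here is a hedged sketch of the resize step; the 400 px target is the ballpark figure above, not a hard rule:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    public class ResizeForDetection {
        /** Downscale to the given width, preserving aspect ratio. */
        public static Mat shrinkToWidth(Mat frame, double targetWidth) {
            double scale = targetWidth / frame.cols();
            if (scale >= 1.0) return frame; // already small enough
            Mat small = new Mat();
            // INTER_AREA is the usual interpolation choice for downscaling.
            Imgproc.resize(frame, small, new Size(), scale, scale,
                    Imgproc.INTER_AREA);
            return small;
        }

        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            Mat frame = Imgcodecs.imread("frame.jpg"); // placeholder path
            Mat small = shrinkToWidth(frame, 400);
            System.out.println("Resized to " + small.cols() + "x" + small.rows());
        }
    }

Remember that detections found in the shrunken frame then have to be scaled back up by 1/scale to map onto the original frame.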
Do Haar cascades work better on grayscale images?
They only work on grayscale images.
If you read the first part of the article, you will see that face detection consists of detecting many binary patterns in the image. This basically comes from the Viola-Jones paper, which is the basis of this algorithm.
Do different lighting conditions make a difference?
Maybe in some cases; Haar features are largely lighting-invariant.
If by different lighting conditions you mean taking images under green or red light, then it should not affect detection: Haar features (being computed on grayscale) are independent of the RGB color of the input image. Detection mainly depends on the gradients/features in the input image, so as long as there are enough gradient differences, such as the eyebrow having lower intensity than the forehead, it will work fine.
But consider a case where the input image is backlit or the ambient light is very low. In that case it is possible that some prominent features won't be found, which may result in the face not being detected.
What exactly does the method detectMultiScale(params) do?
I imagine that if you have read the article by this point, you already know it well.
If I want to go on to further analysis of eye blinking, eye closure duration, mouth yawning, head nodding, and head orientation to detect fatigue (drowsiness) using a support vector machine, any advice?
No, I wouldn't suggest performing this kind of gesture detection with an SVM, as it would be extremely slow to run 10 different cascades to determine the current facial state. Instead, I would recommend a facial landmark detection framework such as Dlib. You may want to look at other frameworks as well, because Dlib's model is nearly 100MB, which may not suit your needs if you want to port to a mobile device. The key is facial landmark detection: once you have the full face labelled, you can draw conclusions such as whether the mouth is open or the eyes are blinking, and it works in real time, so your video processing won't suffer much.
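To make the landmark idea concrete: a common, lightweight signal for blink and eye-closure analysis is the eye aspect ratio (EAR) computed from the six standard eye landmarks. The sketch below shows only the geometry; the landmark coordinates themselves are assumed to come from a detector such as Dlib's 68-point model:

    public final class EyeAspectRatio {

        private static double dist(double[] a, double[] b) {
            return Math.hypot(a[0] - b[0], a[1] - b[1]);
        }

        /**
         * p[0]..p[5] are the eye landmarks ordered around the eye:
         * p[0]/p[3] are the horizontal corners, p[1]/p[2] the upper lid,
         * p[5]/p[4] the lower lid. EAR falls towards 0 as the eye closes,
         * so a sustained low EAR over several frames suggests drowsiness.
         */
        public static double ear(double[][] p) {
            double vertical = dist(p[1], p[5]) + dist(p[2], p[4]);
            double horizontal = dist(p[0], p[3]);
            return vertical / (2.0 * horizontal);
        }

        public static void main(String[] args) {
            // Hypothetical landmark coordinates for an open eye.
            double[][] eye = {
                {0, 5}, {3, 2}, {7, 2}, {10, 5}, {7, 8}, {3, 8}
            };
            System.out.printf("EAR = %.2f%n", ear(eye)); // ~0.6 open
        }
    }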
I'm programming an Android app that draws a grid which you can pan around and move through. The grid consists of about 2000 to 5000 quads, each with a different texture. I define 4 vertices and use an index buffer to draw each quad, positioning it with a model matrix before drawing. Since you can move through the scene, I use view frustum culling, which improves performance in some situations. Unfortunately, there may be cases where I need to draw all of the quads, so I want to ask how to prevent slow drawing.
I can't use a texture atlas, as all of the textures are pretty big (from 256x256 to 1024x1024). I think calling glDrawElements() for each quad is what slows me down, but I don't know how to change that.
Another idea I had was to draw the scene to a texture and bind this texture to a single quad, creating the illusion of the scene being drawn. As the user gets closer, I could redraw it at a higher resolution. Could this work?
I look forward to any kind of help.
I can't use a texture atlas, as all of the textures are pretty big (from 256x256 to 1024x1024).
You can fit 64 256x256 textures into a 2048x2048 atlas; that's a huge number, so you should definitely atlas. Even getting four 1024x1024 textures onto a 2048x2048 sheet is worth doing, as it can cut your draw call count to a quarter.
And as WLGfx says in the comments to your question, you should batch up any quads that use the same texture (with atlasing there will be a lot more of these).
I think this would be enough, but you might still have a pretty high draw call count in your fully zoomed-out view. After implementing atlasing and batching, if performance there is still a problem, you could create a separate asset set of thumbnail textures at, say, quarter resolution (so a 256x256 becomes 64x64). This thumbnail asset set would fit onto just a handful of 2048x2048 atlas sheets, and you could switch to it when zoomed out far enough.
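As a sketch of what texture-sorted batching can look like on Android GLES: Quad, its vertex-append method, and the buffer/shader setup are hypothetical placeholders here; the point is the flush-on-texture-change structure, which turns thousands of per-quad draw calls into one call per texture batch:

    import android.opengl.GLES20;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import java.util.Comparator;
    import java.util.List;

    class QuadBatcher {
        private static final int MAX_QUADS = 1024;
        private static final int FLOATS_PER_QUAD = 4 * 5; // 4 verts * (x,y,z,u,v)

        /** Hypothetical quad: knows its atlas page, can emit its vertices. */
        interface Quad {
            int textureId();
            void appendVertices(FloatBuffer out); // writes its 4 vertices
        }

        private final FloatBuffer vertices = ByteBuffer
                .allocateDirect(MAX_QUADS * FLOATS_PER_QUAD * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();

        void draw(List<Quad> visible) {
            // Sort so quads sharing a texture end up adjacent.
            visible.sort(Comparator.comparingInt(Quad::textureId));

            int currentTexture = -1;
            int quadsInBatch = 0;
            for (Quad q : visible) {
                if (q.textureId() != currentTexture || quadsInBatch == MAX_QUADS) {
                    flush(quadsInBatch); // one draw call per batch
                    quadsInBatch = 0;
                    currentTexture = q.textureId();
                    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, currentTexture);
                }
                q.appendVertices(vertices);
                quadsInBatch++;
            }
            flush(quadsInBatch);
        }

        private void flush(int quadCount) {
            if (quadCount == 0) return;
            vertices.flip();
            // Upload 'vertices' to a VBO and set attribute pointers here
            // (omitted); an element buffer holding the repeating
            // 0,1,2,2,3,0 quad index pattern is assumed to be bound.
            GLES20.glDrawElements(GLES20.GL_TRIANGLES, quadCount * 6,
                    GLES20.GL_UNSIGNED_SHORT, 0);
            vertices.clear();
        }
    }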
Another idea I had was to draw the scene to a texture and bind this texture to a single quad, creating the illusion of the scene being drawn. As the user gets closer, I could redraw it at a higher resolution. Could this work?
This could work as long as your scene is very static; if the quads are moving or changing every frame, then it might not help. Also, there might be a noticeable framerate hitch when you have to do the full redraw.
It is my understanding that images should be a power of 2 for GPU optimization. However, if I'm packing my textures into a sheet for libGDX, can the atlas be a power of 2 and NOT the actual regions? Or should the atlas sheet be a power of 2 and each TextureRegion also be a power of 2?
My understanding is that they don't have to be. As far as I know, the reason for POT (power-of-two) textures is to enable optimizations such as mipmapping, anisotropic filtering, and so on. When you pack many images into one texture, you upload that entire texture to the GPU, and the GPU can perform the desired optimizations on the whole texture. The atlas is like an index for looking up certain sectors (or "pieces") of your texture (texture regions), so you can retrieve the desired regions easily. Those regions don't need to be POT, because the GPU already performs its optimizations on the entire set (the entire texture). I think you should ask this type of question in the game dev community, where you will get a deeper answer.
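For illustration, a minimal libGDX sketch; the file and region names are placeholders. TexturePacker can pad the page out to a power of two while the regions keep their original, arbitrary sizes, and lookups go through the atlas index:

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.g2d.TextureAtlas;
    import com.badlogic.gdx.graphics.g2d.TextureRegion;

    public class AtlasDemo extends ApplicationAdapter {
        private TextureAtlas atlas;

        @Override
        public void create() {
            // The packed page can be a POT size (e.g. 2048x2048) while
            // the regions inside it stay whatever size the art is.
            atlas = new TextureAtlas(Gdx.files.internal("game.atlas"));
            TextureRegion hero = atlas.findRegion("hero"); // e.g. 37x51
            Gdx.app.log("AtlasDemo", "region: " + hero.getRegionWidth()
                    + "x" + hero.getRegionHeight());
        }

        @Override
        public void dispose() {
            atlas.dispose();
        }
    }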
I am making a game using LWJGL, and so far I have decided to have 5 or 6 sprite sheets: one for the blocks, one for the items, objects, and so on. That way I have sprite sheets of closely related sprites that are usually the same size (in the case of blocks and items). But is this the better way? Or should I just throw everything onto a single sprite sheet, with no organization whatsoever?
If I am doing it the right way, there is also another problem. For example, when you are on the map, I need to draw the blocks. But over the blocks there can be electrical wires and other stuff that live on a separate sprite sheet. This information, however, is stored in the same array, so normally I iterate over it once and, for each tile, draw the block and then the wire over it, switching sprite sheets twice per iteration. But I thought switching might take some time, so maybe it would be better to iterate twice: first draw all the blocks, then iterate again to draw the wires? To switch textures, I am using the SlickUtil Texture class, which has a bind method that is really easy to use.
There is no "ideal"; there are simply the factors that matter for your needs.
Remember why you use sprite sheets at all: switching textures is too expensive to do per-object when dealing with 2D rendering. So as long as you're not switching textures for each sprite you render, you'll already be ahead of the game performance-wise.
The other considerations you need to take into account are:
Minimum user hardware specifications. Specifically, what is the smallest GL_MAX_TEXTURE_SIZE you want your code to work with? The larger your sprite sheets get, the greater your hardware requirements, since a single sprite sheet must be a single texture.
This value is hardware-dependent, but there are some general requirements. OpenGL 3.3 requires 1024 at a minimum; pretty much every piece of GL 3.3 hardware gives 4096. OpenGL 4.3 requires a massive 16384, which is approaching the theoretical limits of floating-point texture coordinate capacity (assuming you want at least 8 bits of sub-pixel texture coordinate precision).
GL 2.1 has a minimum requirement of 64, but any actual 2.1 hardware people will have will offer between 512 and 2048. So pick your sprite sheet size based on this.
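You can also just ask the hardware at startup. A minimal LWJGL sketch, assuming a GL context is already current on the calling thread:

    import org.lwjgl.opengl.GL11;

    public class TextureSizeProbe {
        /** Call only once an OpenGL context is current on this thread. */
        public static int maxTextureSize() {
            return GL11.glGetInteger(GL11.GL_MAX_TEXTURE_SIZE);
        }
    }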
What you're rendering. You want to be able to render as much as possible from one sprite sheet; what you want to avoid is frequent texture switches. If your world is divided into layers and you can fit the sprites for each layer onto their own sheet, you're doing fine. No hardware is going to choke on 20 texture changes; it's tens of thousands that are the problem.
The main thing is to render everything that uses a sheet all at once. Not necessarily in the same render call; you can switch meshes and shader uniforms/fixed-function state. But you shouldn't switch sheets between these renders until you've rendered everything needed for that sheet.
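Applied to the blocks-and-wires question above, that means two passes over the same tile array, binding each sheet once per frame instead of twice per tile. A sketch, where Tile and the draw methods are hypothetical and only Slick-Util's Texture.bind() comes from the question:

    import org.newdawn.slick.opengl.Texture;
    import java.util.List;

    class TileRenderer {
        /** Hypothetical tile: knows its sprites and whether it has a wire. */
        interface Tile {
            boolean hasWire();
            void drawBlock(); // emits the quad for the block sprite
            void drawWire();  // emits the quad for the wire sprite
        }

        private final Texture blockSheet; // Slick-Util textures
        private final Texture wireSheet;

        TileRenderer(Texture blockSheet, Texture wireSheet) {
            this.blockSheet = blockSheet;
            this.wireSheet = wireSheet;
        }

        /** Two passes: each sheet is bound once per frame, not per tile. */
        void render(List<Tile> tiles) {
            blockSheet.bind();
            for (Tile t : tiles) t.drawBlock();

            wireSheet.bind();
            for (Tile t : tiles) {
                if (t.hasWire()) t.drawWire();
            }
        }
    }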
I am building a 2D top-down tile-based game in Java. Naturally, you can pan around and zoom in on the game; there are currently 10 zoom levels, with each tile ranging from 10x10 pixels up to 100x100 pixels accordingly. Currently, the tiles for each zoom level are stored in separate sprite sheets, read in at startup and stored in a BufferedImage array. I am sure this can't be the best way to go about it.
I am looking for any tips to improve efficiency in the long term. Would it be better to have only the 100x100 tiles and scale them dynamically in Java, or to somehow use vector graphics in Java (I'm not sure how, but I'm sure Google could help me), or something else?
Many thanks!
I'd go dynamic.
Normally in computer graphics you use matrices that, applied to the graphics context, modify everything you draw on it.
These are used to modify position, scale, rotation, and so on. Rather than subtracting the camera position from every tile, you apply the translation once to the graphics context and then draw your tiles at their world positions. The graphics context takes care of placing the tiles in the correct screen space.
I suggest the following reads:
http://docs.oracle.com/javase/tutorial/2d/advanced/transforming.html
http://www.javalobby.org/java/forums/t19387.html
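A minimal Java 2D sketch of that idea; camX, camY, zoom, and drawTiles() are hypothetical stand-ins for your own camera state and tile-drawing code:

    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import javax.swing.JPanel;

    class WorldPanel extends JPanel {
        double camX, camY;  // camera position in world units
        double zoom = 1.0;  // current zoom factor

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2d = (Graphics2D) g.create();

            // Apply the camera once; after this, every tile drawn at its
            // world position lands in the right place on screen.
            g2d.scale(zoom, zoom);
            g2d.translate(-camX, -camY);

            drawTiles(g2d); // draw in world coordinates
            g2d.dispose();
        }

        private void drawTiles(Graphics2D g2d) { /* your tile loop */ }
    }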
If you're doing fixed zooming (i.e. each zoom level is a fixed distance from the camera), as opposed to fluid zooming (the player can zoom to 3.3x or 7.5x, not just 1x, 2x, 3x, etc.), then it's massively wasteful to try to solve this by simply applying a zoom transform. It's tempting because it's the least complicated approach and easy to understand from an implementation standpoint, but it means that at maximum zoom-out you're rendering an area that's 10x larger in the X direction and 10x larger in the Y direction, so the area of the world you have to render is 100x larger than at maximum zoom-in. I also doubt you'll like the way your textures get squished by the hardware as you zoom out. Computer graphics isn't the same as optics: subpixel rendering and the other things that happen in computer graphics aren't going to make your textures look very good if you hand that task off to the software/hardware.
Even if you do fluid zooming, I would still create level-of-detail textures and dynamically swap them out depending on the distance between the world being rendered and the camera.
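A sketch of that selection logic against the question's 10x10-to-100x100 tile assets; the sizes and field names are hypothetical. The idea is to pick the smallest pre-scaled sheet that is still at least as large as the tile's on-screen size, so the renderer only ever downscales slightly and never upscales:

    import java.awt.image.BufferedImage;

    class TileLod {
        // Native tile widths of each pre-scaled sheet, largest first.
        static final int[] TILE_SIZES = {100, 50, 25, 12};

        BufferedImage[][] sheets; // sheets[lod][tileIndex], loaded at startup

        int pickLod(double zoom) {
            double onScreen = TILE_SIZES[0] * zoom; // on-screen tile width
            for (int lod = TILE_SIZES.length - 1; lod >= 0; lod--) {
                if (TILE_SIZES[lod] >= onScreen) {
                    return lod; // smallest sheet that won't be upscaled
                }
            }
            return 0; // zoomed in past 1:1; use the largest sheet
        }
    }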
Also, 10 zoom levels? Are you sure you really need 10? Zoom is usually used in 2D games to let you perform different activities at different levels of detail, because a particular zoom level is especially well suited to a certain set of activities. I don't remember any 2D game that needed 10 zoom levels for this; 3-5 is the most I've ever seen, and I've never felt that it wasn't enough. It also seems like a lot of art work to produce images for every one of 10 zoom levels.
You're also likely to find that applying an AffineTransform sounds like a good idea but is extremely computationally expensive; if you need 60fps performance, you're highly unlikely to achieve it this way. Don't take my word for it, though: go try it and see how badly it falls over on itself.