Is the power-of-2 rule required in AndEngine GLES2, or was it only for GLES1? I know it was recommended to create your BitmapTextureAtlas with width and height values that are powers of 2, for example 512x512. But is this necessary? If yes, why?
My understanding is that in GLES2 it is not required. I think/guess it is possible that some edge-case devices might not have full or complete implementations, so I still do it.
EDIT: "I still do it" as a general rule for production. During testing I have played with it being non power of 2 and it works fine on my devices.
I want to be able to change an integer value that I have:
Rotating clockwise would increase the value by one
Rotating counter-clockwise would decrease the value by one
I tried searching and found "rotary input", but I didn't really manage to put it into practice (most of what I found was about ScrollView).
How can I achieve it?
For Compose you can use the onRotaryInputAccumulated modifier from Horologist. Here is an example for volume control:
https://github.com/google/horologist/blob/94552e13ea45613cc9b804ee7080b4aa92311d54/audio-ui/src/main/java/com/google/android/horologist/audio/ui/VolumeScreen.kt#L108-L114
The raw events come from onRotaryScrollEvent; see: How to implement PositionIndicator for bezel (Galaxy Watch 4 Classic) Wear OS 3.0 (Jetpack Compose)?
For Views, use https://developer.android.com/training/wearables/user-input/rotary-input
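If you go the View route, here is a minimal sketch of turning the raw rotary events into +1/-1 steps on an integer. The class and field names are placeholders, and the view you attach it to must be focusable and have focus to receive the events.

import android.view.InputDevice;
import android.view.MotionEvent;
import android.view.View;

public class RotaryCounter {
    private int value = 0; // the integer being adjusted

    // Pass this to myView.setOnGenericMotionListener(counter.listener).
    public final View.OnGenericMotionListener listener = (v, event) -> {
        if (event.getAction() == MotionEvent.ACTION_SCROLL
                && event.isFromSource(InputDevice.SOURCE_ROTARY_ENCODER)) {
            // AXIS_SCROLL is positive for one rotation direction and negative
            // for the other; the exact sign convention can vary per device.
            float delta = -event.getAxisValue(MotionEvent.AXIS_SCROLL);
            value += (delta > 0) ? 1 : -1;
            return true; // event consumed
        }
        return false;
    };

    public int getValue() {
        return value;
    }
}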
I am using Java with the OpenCV library to detect the face, eyes, and mouth using a laptop camera.
What I have done so far:
Capture video frames using a VideoCapture object.
Detect the face using Haar cascades.
Divide the face region into a top region and a bottom region.
Search for the eyes inside the top region.
Search for the mouth inside the bottom region.
Problem I am facing:
At first the video runs normally, but then it suddenly becomes slower.
Main Questions:
Do higher camera resolutions work better for Haar cascades?
Do I have to capture video frames at a certain scale, for example 100px x 100px?
Do Haar cascades work better on grayscale images?
Do different lighting conditions make a difference?
What exactly does the method detectMultiScale(params) do?
If I want to go for further analysis of eye blinking, eye closure duration, mouth yawning, head nodding, and head orientation to detect fatigue (drowsiness) using a support vector machine, any advice?
Your help is appreciated!
The following article will give you an overview of what is going on under the hood; I would highly recommend reading it.
Do higher camera resolutions work better for Haar cascades?
Not necessarily. cascade.detectMultiScale has parameters to adjust for various input width/height scenarios, like minSize and maxSize. These are optional parameters, but you can tweak them to get robust predictions if you have control over the input image size. If you set minSize to a small value and ignore maxSize, it will work for small and high-resolution images alike, but performance will suffer. If you are wondering why there is then no difference between high-res and low-res images, consider that cascade.detectMultiScale internally scales the image to lower resolutions for a performance boost; that is why defining maxSize and minSize is important to avoid unnecessary iterations.
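For illustration, here is a minimal Java sketch of how those parameters are typically passed; the cascade object, the 60x60 and 400x400 bounds, and the other values are assumptions, not the only correct settings.

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class FaceDetectSketch {
    public static MatOfRect detectFaces(Mat frame, CascadeClassifier faceCascade) {
        // Haar cascades operate on single-channel images, so convert to gray first.
        Mat gray = new Mat();
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.equalizeHist(gray, gray); // optional: evens out lighting a bit

        MatOfRect faces = new MatOfRect();
        // minSize/maxSize bound the face sizes that are searched, which limits
        // the number of pyramid levels and therefore the runtime.
        faceCascade.detectMultiScale(
                gray, faces,
                1.1,                  // scaleFactor
                3,                    // minNeighbors
                0,                    // flags (unused in newer OpenCV)
                new Size(60, 60),     // assumed smallest face of interest
                new Size(400, 400));  // assumed largest face of interest
        return faces;
    }
}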
Do I have to capture video frames at a certain scale, for example 100px x 100px?
This mainly depends on the parameters you pass to cascade.detectMultiScale. Personally, I think 100 x 100 would be too small for detecting smaller faces in the frame, as some features would be completely lost when resizing the frame to smaller dimensions, and cascade.detectMultiScale depends heavily on the gradients or features in the input image.
But if the face is the major part of the input frame and there are no other smaller faces lurking in the background, then you may use 100 x 100. I have tested some sample faces of size 100 x 100 and it worked pretty well. If that is not the case, then a width of 300-400 px should work well. However, you will need to tune the parameters in order to achieve good accuracy.
Do Haar cascades work better on grayscale images?
They work only on grayscale images.
If you read the first part of the article, you will see that face detection amounts to detecting many binary patterns in the image. This comes from the Viola-Jones paper, which is the basis of this algorithm.
Do different lighting conditions make a difference?
Maybe in some cases, but Haar features are largely invariant to lighting.
If by different lighting conditions you mean taking images under green or red light, then it should not affect detection: the Haar features (since they work on grayscale) are independent of the RGB colour of the input image. Detection mainly depends on the gradients/features in the input image, so as long as there are enough gradient differences, such as the eyebrow having lower intensity than the forehead, it will work fine.
But consider a case where the input image is backlit or has very low ambient light. In that case it is possible that some prominent features are not found, which may result in the face not being detected.
What exactly does the method detectMultiScale(params) do?
I guess that if you have read the article by now, you already know the answer to this one.
If I want to go for further analysis of eye blinking, eye closure duration, mouth yawning, head nodding, and head orientation to detect fatigue (drowsiness) using a support vector machine, any advice?
No, I would not suggest performing this kind of gesture detection with an SVM, as it would be extremely slow to run 10 different cascades to determine the current facial state. Instead, I recommend using a facial landmark detection framework such as Dlib. You may look at other frameworks as well, because Dlib's model is nearly 100 MB and may not suit your needs if you want to port it to a mobile device. So the key is facial landmark detection: once you have the full face labelled, you can draw conclusions such as whether the mouth is open or the eyes are blinking, and it works in real time, so your video processing won't suffer much.
So I have made a game in Eclipse with Java for Android where a player dodges obstacles falling from above. When I tested it on different screen sizes (not the emulator, real phones! I used a Galaxy S4 (bigger screen) and an HTC (smaller screen)), the speed of the player was different. On the HTC the speed was normal, but on the Galaxy S4 the player was too slow. It's because of the resolution and size differences, and now I am just asking for an example of Java code showing how to detect these differences and actually change the speed of the player. I didn't try anything yet except if statements like:
if (myView.getWidth() < 500)
{
xSpeed = 5;
}
but it didn't really work well. If you need any more information, please ask. I'm thankful for any kind of help.
I can give you a suggestion that I actually used in my own game, because I faced this same issue: make everything relative, i.e. your displacements, velocities, accelerations, etc.
For instance, if you need to make the speed relative, do something like this:
xSpeed = SCREEN_WIDTH/20;
Now if you had a screen width of 200 pixels on one phone, 400 pixels on another, and 800 pixels on yet another, your speed will scale accordingly, being 10, 20, and 40 pixels respectively. I hope this helps; please provide code if you require further assistance.
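For illustration, here is a rough Java sketch of reading the real screen width at runtime and deriving the speed from it; the class name and the divisor of 20 are just example choices, as above.

import android.content.Context;
import android.util.DisplayMetrics;

public class SpeedHelper {
    // Returns a horizontal speed that scales with the device's screen width,
    // so the player covers the same fraction of the screen per update on
    // every device. The divisor is just a tuning value.
    public static float relativeXSpeed(Context context) {
        DisplayMetrics metrics = context.getResources().getDisplayMetrics();
        return metrics.widthPixels / 20f;
    }
}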
I am looking to support a range of devices (TVs, tablets, phones).
I have the following directory setup.
res/layout/
res/layout-small/
res/layout-large/
res/layout-xlarge/
I expected layout-xlarge to target only TVs but it seems to target my 10 inch tablet as well.
Can anyone tell me why this is?
Should I try using the screen-width directory structure?
e.g. res/layout-sw600dp etc.
The '-small', '-large', etc. qualifiers use Android's generalized size buckets, which are quite coarse: '-xlarge' covers roughly 10-inch-and-up screens, so a 10-inch tablet matches it just as a TV does.
The correct way to do it, as you said, is to use the smallest-width structure (-sw600dp).
dp in Android is a unit for density-independent pixels:
What is the difference between "px", "dp", "dip" and "sp" on Android?
Using 'tvdpi' might be a good idea for targeting TVs as well.
Here is a good resource for this information:
http://developer.android.com/guide/practices/screens_support.html
I use the following:
-sw320dp (or default) for smaller phones
-sw360dp for regular phones
-sw480dp for huge phones
-sw600dp for 7" tablets
-sw720dp for 10" tablets
You can also specify pixel densities, such as -ldpi, -mdpi, -hdpi, -xhdpi, and -xxhdpi.
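If you want to verify which bucket a particular device falls into, here is a small sketch (the class name is mine) that reads the value the resource system compares against the -sw<N>dp qualifiers:

import android.content.Context;
import android.content.res.Configuration;

public class ScreenBucket {
    // Returns the smallest screen width in dp, e.g. roughly 600 on a typical
    // 7" tablet and 720 on a 10" tablet; this is what -sw600dp etc. match on.
    public static int smallestWidthDp(Context context) {
        Configuration config = context.getResources().getConfiguration();
        return config.smallestScreenWidthDp;
    }
}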
Here's a screenshot from one of my projects. Here, I attempt to provide the perfect images for all screen sizes and densities. This may be overkill for what you are trying to do.
For more information, please see http://developer.android.com/guide/practices/screens_support.html
I have asked this question elsewhere too, but since the topic was a different one, maybe it was not noticed. I got the eigenface algorithm for face recognition working using OpenCV in Java. I want to increase the accuracy of the code, as it is a well-known fact that eigenfaces rely greatly on light intensity.
What I have Right Now
I get perfect results if I check an image taken in the same place as the pictures in my database, but the results get weird when I feed in images taken in different places.
I figured out that the reason is that my images differ in light intensity.
Hence, my question is:
Is there any way to set a standard for the images saved in the database, or for the ones coming fresh into the system for a recognition check, so that I can improve the accuracy of the face recognition system I currently have?
Any kind of positive solution to the problem would be really helpful.
Identifying the lighting intensity and pose is an important factor in face recognition. Try doing a histogram comparison between the training and testing images (http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html). This helps you avoid the worst lighting situations. Preprocessing is also one of the key success factors in face recognition: gamma correction and DoG filtering may reduce the lighting problems.
You can also apply an elliptical mask to keep only the face, removing the noise created by hair, neck, etc.
The OpenCV cookbook provides an excellent and simple tutorial on this.
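As a rough Java/OpenCV illustration of those preprocessing ideas (histogram equalization plus an elliptical mask); the 100x100 crop size and the ellipse dimensions are assumptions you would tune for your data.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class FacePreprocess {
    // Normalizes a cropped face before training/recognition:
    // grayscale -> fixed size -> histogram equalization -> elliptical mask.
    public static Mat prepare(Mat faceBgr) {
        Mat gray = new Mat();
        Imgproc.cvtColor(faceBgr, gray, Imgproc.COLOR_BGR2GRAY);

        Mat resized = new Mat();
        Imgproc.resize(gray, resized, new Size(100, 100)); // assumed model size

        Imgproc.equalizeHist(resized, resized); // reduce lighting differences

        // Elliptical mask keeps the central face region and zeroes out hair,
        // neck and background. (In OpenCV 2.4 the drawing call is Core.ellipse.)
        Mat mask = Mat.zeros(resized.size(), resized.type());
        Imgproc.ellipse(mask, new Point(50, 50), new Size(40, 50), 0, 0, 360,
                new Scalar(255), -1);
        Mat masked = new Mat();
        Core.bitwise_and(resized, mask, masked);
        return masked;
    }
}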
Below are some options that may help you boost your accuracy:
1] Image Normalization:
Scale your image pixel values to the range 0 to 1 to reduce the effect of lighting conditions.
2] Image Alignment (This is a very important step to achieve good performance):
Align all the training and test images so that the eyes, nose, and mouth of all the faces have almost the same coordinates.
Check this post on face alignment (Highly recommended) : https://www.pyimagesearch.com/2017/05/22/face-alignment-with-opencv-and-python/
3] Data augmentation trick:
You can add filters to your faces that mimic the same face under different lighting conditions.
So from one face you can make several images under different lighting conditions.
4] Removing Noise:
Before performing step 3, apply a Gaussian blur to all the images.
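Here is a small Java/OpenCV sketch of options 1 and 4 together; the 3x3 kernel and the 0..1 scaling factor are just example values.

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class TrainPrep {
    // Option 4: light Gaussian blur to suppress noise, then
    // Option 1: scale 8-bit pixel values into the 0..1 range.
    public static Mat normalize(Mat gray8u) {
        Mat blurred = new Mat();
        Imgproc.GaussianBlur(gray8u, blurred, new Size(3, 3), 0);

        Mat normalized = new Mat();
        blurred.convertTo(normalized, CvType.CV_32F, 1.0 / 255.0);
        return normalized;
    }
}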