I have a video in my app, and I'm trying to make it switch to full-screen when the user rotates the device from portrait to landscape.
I'm using OrientationEventListener like this:
orientationEventListener = new OrientationEventListener(this, SensorManager.SENSOR_DELAY_NORMAL) {
    @Override
    public void onOrientationChanged(int orientation) {
        if (orientation <= 45 && playerManager.isFullscreen()) {
            onPlayerFullscreenChange(false); // ORIENTATION_PORTRAIT
        } else if (orientation <= 135 && !playerManager.isFullscreen()) {
            onPlayerFullscreenChange(true); // ORIENTATION_LANDSCAPE
        } else if (orientation <= 225 && playerManager.isFullscreen()) {
            onPlayerFullscreenChange(false); // ORIENTATION_PORTRAIT
        } else if (orientation <= 315 && !playerManager.isFullscreen()) {
            onPlayerFullscreenChange(true); // ORIENTATION_LANDSCAPE
        }
    }
};
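The listener only delivers callbacks once it is enabled; I enable and disable it with the activity lifecycle, roughly like this:

@Override
protected void onResume() {
    super.onResume();
    if (orientationEventListener.canDetectOrientation()) {
        orientationEventListener.enable();
    }
}

@Override
protected void onPause() {
    super.onPause();
    orientationEventListener.disable();
}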
The problem is that this listener fires so many times that my video can't play normally. The activity also ends up going through onCreate multiple times, unlike before.
The onOrientationChanged(int orientation) method notifies you every time the sensor detects a change in the way you are holding the device. It is nearly impossible for a user to hold the device perfectly still, so onOrientationChanged() gets called many times.
What I understand from your question is that you only want to display the video in fullscreen when the user holds the device horizontally.
Thus, consider angular values of 0-45, 135-225, and 315-360 as VERTICAL, and angles of 45-135 and 225-315 as HORIZONTAL.
This will make sense if you give it some thought.
Store the "previous orientation", which you can set to null initially for example, if you use String. Then compare it with the currently detected orientation, if they are not the same, take an action (set video to full screen or vice-versa) and save the current orientation values as "previous orientation".
This problem really comes down to better implementing your algorithm. All the best!
I am currently using the code from https://stackoverflow.com/a/28186621/4541217, albeit heavily modified to suit my needs: I need to take an image from the camera as well as select one from the gallery, and I am also zooming the image.
This all works nicely, except for one issue. I lose things when I rotate the device.
I have
bTemp = null;
if (getLastNonConfigurationInstance() != null) {
    bTemp = getLastNonConfigurationInstance();
}
in my onCreate, and an override...
@Override
@Deprecated
public Object onRetainNonConfigurationInstance() {
    return bTemp;
}
I can make this return the image but I lose all of my stroke information.
From the example, I have tried saving the Uri, the alteredBitmap, the bitmap and the choosenImageView, but none of these work. If I take a photo, scribble on it, and then rotate before doing anything else (keeping the alteredBitmap), I get the first set of strokes back, but nothing after that.
Can anyone help me to keep my stroke information on rotate please?
Learn about the activity lifecycle.
You need to override functions like onPause, onResume, and use the savedInstanceState.
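For example, a small Bitmap can ride along in the saved instance state, since Bitmap implements Parcelable. A sketch (the key name is arbitrary; large bitmaps should be persisted to disk instead):

@Override
protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putParcelable("alteredBitmap", alteredBitmap); // Bitmap is Parcelable
}

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    if (savedInstanceState != null) {
        alteredBitmap = savedInstanceState.getParcelable("alteredBitmap");
    }
}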
I managed to work it out eventually, so for anyone else that is trying to do the same, here is what I did.
Following on from the example link in my opening post, in order to make it stick while rotating...
In onRetainNonConfigurationInstance, return the alteredBitmap (this is in the Activity):
@Override
@Deprecated
public Object onRetainNonConfigurationInstance() {
    return alteredBitmap;
}
then, in the onCreate of the activity...
if (getLastNonConfigurationInstance() != null) {
    bmp = (Bitmap) getLastNonConfigurationInstance();
    alteredBitmap = Bitmap.createBitmap(bmp.getWidth(), bmp.getHeight(), bmp.getConfig());
    choosenImageView.setNewImage(alteredBitmap, bmp);
}
notice that the "bmp" is what was sent from alteredBitmap, and now alteredBitmap is the new image. This is then passed into the setNewImage in the DrawableImageView.
I have a problem that I have been unable to solve in a way that I am very happy with.
I have a view that I am dragging and dropping into a list. That list is created using a RecyclerView. The drag object works fine, and the RecyclerView's items can all receive the events no problem. Now I want to make the list scroll as the user drags their finger close to the top or bottom of the list. My first step was to add a drag-event listener to the RecyclerView and attempt to start scrolling each time I got a location near the top or bottom edge. So, my ACTION_DRAG_LOCATION case looks something like this:
case DragEvent.ACTION_DRAG_LOCATION: {
    removeDragScrollCallBack();
    float y = event.getY();
    final int scrollAreaHeight = v.getHeight() / 4;
    final int delayMills = 16;
    int scrollAmount = 0;
    if (y > v.getHeight() - scrollAreaHeight) {
        scrollAmount = 10;
    } else if (y < scrollAreaHeight) {
        scrollAmount = -10;
    }
    if (Math.abs(scrollAmount) > 0) {
        final int finalScrollAmount = scrollAmount;
        dragScrollRunnable = new Runnable() {
            @Override
            public void run() {
                if (canScrollVertically(finalScrollAmount)) {
                    scrollBy(0, finalScrollAmount);
                    if (dragScrollHandler != null && dragScrollRunnable != null) {
                        dragScrollHandler.postDelayed(this, delayMills);
                    }
                }
            }
        };
        dragScrollRunnable.run();
    }
    return true;
}
It kinda works. Things scroll in the right direction, but it seems to sputter a bit and generally doesn't scroll very smoothly. Additionally, the drop event sometimes doesn't make it to the children while the RecyclerView is still scrolling.
So, I went to the Google example of doing a similar thing with a ListView - link. I modified the code they used for their ListView and tried to handle my RecyclerView in a similar manner. This had even poorer results for me.
I have tried various other alterations of these techniques, and swapped to using the smoothScroll function instead of the standard scroll function, but I'm not too happy with any of the results.
Does anyone have a good solution for how to handle this?
Update: I now believe that many of my problems with this functionality are due to the drag listener being fairly unreliable. Sometimes the RecyclerView fails to get events while its children are receiving events.
Turns out the drag listener on a view is not terribly reliable. At random times as I moved my finger around the screen, the drag listener wouldn't receive all of the events. I believe the reason for this was the way the children of the RecyclerView were also receiving the onDrag callbacks. The solution was to do what I had tried originally, but through a listener on the fragment itself. Now when I get an event, I check the coordinates to see which view it is in, and then convert them to local coordinates for that view. Then I can determine exactly how I need to handle it; a sketch follows.
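A sketch of that fragment-level listener (view names are illustrative; handleDragInRecycler is a hypothetical helper):

rootView.setOnDragListener(new View.OnDragListener() {
    @Override
    public boolean onDrag(View v, DragEvent event) {
        if (event.getAction() == DragEvent.ACTION_DRAG_LOCATION) {
            // Translate the root-relative drag point into the RecyclerView's space.
            int[] rootPos = new int[2];
            int[] recyclerPos = new int[2];
            v.getLocationOnScreen(rootPos);
            recyclerView.getLocationOnScreen(recyclerPos);
            float localX = event.getX() + rootPos[0] - recyclerPos[0];
            float localY = event.getY() + rootPos[1] - recyclerPos[1];
            if (localX >= 0 && localX <= recyclerView.getWidth()
                    && localY >= 0 && localY <= recyclerView.getHeight()) {
                handleDragInRecycler(localX, localY); // e.g. the edge-scroll logic above
            }
        }
        return true;
    }
});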
I want to use a multipane layout for wider screens. The data is persisted with SQL and each fragment fetches the right data. The extra layout XML files are in resource directory folders (i.e. layout-w500dp), but I get some strange behavior.
It only seems to work after I select something and then press the back button.
At the moment I am using at most two FrameLayouts, but later I want to do it with four.
I check the level of the deepest selection and assign the fragments accordingly. (Here it's only down to lvl 1, but later I need selections up to lvl 3.)
Here is what I want to achieve.
This gets called in onCreate and when a selection has been made.
private void setScreens() {
    // getLowestSelection() returns 0 when nothing is selected,
    // and 1 if a selection is made in lvl 1 ...
    int i = getLowestSelection();
    int p = 1;
    FragmentTransaction transaction = getSupportFragmentManager().beginTransaction();
    if (findViewById(R.id.fragtwo) != null) {
        p = 2;
        if (i == 1) {
            SectionsScreen secondFragment = new SectionsScreen();
            transaction.replace(R.id.fragtwo, secondFragment);
        }
    }
    if (findViewById(R.id.fragone) != null) {
        if (p == 2) {
            if (i == 0) {
                StatuteScreen statuteScreenFragment = new StatuteScreen();
                transaction.replace(R.id.fragone, statuteScreenFragment);
            }
        }
        if (p == 1) {
            if (i == 0) {
                StatuteScreen statuteScreenFragment = new StatuteScreen();
                transaction.replace(R.id.fragone, statuteScreenFragment);
            } else if (i == 1) {
                SectionsScreen sectionsScreenFragment = new SectionsScreen();
                transaction.replace(R.id.fragone, sectionsScreenFragment);
            }
        }
    }
    transaction.addToBackStack(null);
    transaction.commit();
}
It only works at the moment if I do the following.
Start application = 1 fragment in portrait and landscape (this is the desired behavior)
Make selection in portrait = nothing happens (here is the problem)
Switch to landscape = 2 fragments with the right selection (right behavior; if I make the initial selection in landscape, I need to rotate to portrait and back again)
Switch to portrait = lvl 2 fragment with the right data (right behavior)
Press back button = lvl 1 fragment (right behavior)
From then on I can switch between portrait and landscape and I get the right behavior for selecting items in all orientations, even on back press in landscape, showing only one fragment with lvl 1 when the selection is taken away.
Why am I getting this behavior?
And is this the right approach in the first place?
Considering I want to extend this for further levels and screen widths!
i.e.: will the back stack function properly here? If anyone needs additional info, just say and I'll be happy to add it!
Silly mistake. I save the selections in an Application class and instantiate that in onCreate, but I need to re-fetch it before reading the current selection in the getLowestSelection() method.
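In other words, something like this (MyApp and its accessor are illustrative names):

private int getLowestSelection() {
    // Re-fetch the Application object at call time instead of relying on
    // the reference cached in onCreate.
    MyApp app = (MyApp) getApplication();
    return app.getLowestSelectionLevel(); // hypothetical accessor on the Application class
}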
So basically, I have this code:
if (mCamera.getParameters().getMaxNumDetectedFaces() == 0) {
    System.out.println("Face detection not available");
} else {
    System.out.println("Max faces: " + Integer.toString(mCamera.getParameters().getMaxNumDetectedFaces()));
}
mCamera.setFaceDetectionListener(new FaceDetectionListener() {
    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        System.out.println("Face detection callback called. " + Integer.toString(faces.length));
    }
});
After calling mCamera.startFaceDetection(), the callback is called and everything works as normal. However, if I change cameras, the same code results in the callback never being called. getMaxNumDetectedFaces() returns 35 for both cameras, so I assume face detection is supported on the front camera. I can change the camera back and forth, calling this code each time, and it will work for the back camera but not the front one.
Is there anything else I might be doing wrong?
Is it possible that the quality of the camera that's not working (the front one, right?) isn't good enough for face detection to work? The camera's image may be too noisy for the face detector. There are a lot of other variables that could be hindering this.
Also, doing a search for the front camera, it looks like the front camera's points may be mirrored. This is described in: http://developer.android.com/reference/android/hardware/Camera.Face.html
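That page also shows how to map a detected face rectangle into view coordinates, mirroring for the front camera; roughly (view and displayOrientation are whatever you use for your preview):

RectF mapFaceRect(Camera.Face face, View view, int displayOrientation, boolean frontFacing) {
    // Driver coordinates run from (-1000, -1000) to (1000, 1000); map them to the view.
    Matrix matrix = new Matrix();
    matrix.setScale(frontFacing ? -1 : 1, 1);  // mirror horizontally for the front camera
    matrix.postRotate(displayOrientation);     // match setDisplayOrientation()
    matrix.postScale(view.getWidth() / 2000f, view.getHeight() / 2000f);
    matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);
    RectF rect = new RectF(face.rect);
    matrix.mapRect(rect);
    return rect;
}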
I hope this helps.
Is there a way to check if the camera is being read? Java has always had some issues registering webcams, etc. Perhaps try to make sure you can see images with the webcam.
By the way, if you want any further help, we will need to know more about the code, library, etc.
This code will return the id of your front-facing camera; for other cameras, change the Camera.CameraInfo facing check:
private int findFrontFacingCamera() {
    int cameraId = -1;
    // Search for the front facing camera
    int numberOfCameras = Camera.getNumberOfCameras();
    for (int i = 0; i < numberOfCameras; i++) {
        Camera.CameraInfo info = new Camera.CameraInfo();
        Camera.getCameraInfo(i, info);
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            Log.d("FaceDetector", "Camera found");
            cameraId = i;
            break;
        }
    }
    return cameraId;
}
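Usage is then just a matter of guarding against -1 (no front camera found):

int cameraId = findFrontFacingCamera();
if (cameraId >= 0) {
    Camera camera = Camera.open(cameraId);
    // configure the preview, then startPreview() and startFaceDetection()
}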
I had code which worked on my Galaxy tablet, but on other devices it wouldn't call takePicture and as a result wouldn't trigger face detection. After searching for a while I found this solution, which worked. I added the following line in the class where takePicture is called:
camera.startPreview();
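Face detection only delivers callbacks while the preview is running, so the order matters; roughly (camera setup omitted):

camera.setPreviewDisplay(surfaceHolder); // may throw IOException
camera.startPreview();                   // preview must be running first
camera.startFaceDetection();             // valid only while the preview is active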
You can use Webcame for capturing images from a webcam. It automatically detects the webcam, so no extra configuration is needed, and it supports more than one webcam at a time.
I'm having a really weird problem while following the Gestures tutorial here: http://developer.android.com/resources/articles/gestures.html.
4 unique gestures are created in Gesture Builder: + - × /
Add and multiply are multi-stroke. Subtract and divide are single stroke.
The Activity loads the pre-built GestureLibrary (R.raw.gestures), adds an OnGesturePerformedListener to the GestureOverlayView, and ends with onGesturePerformed() when a gesture is detected & performed.
Activity layout in XML
<android.gesture.GestureOverlayView
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/gestures"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:gestureStrokeType="multiple"
    android:eventsInterceptionEnabled="true"
    android:orientation="vertical"
    />
Located in onCreate()
mLibrary = GestureLibraries.fromRawResource(this, R.raw.gestures);
if (!mLibrary.load()) {
    finish();
}
GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures);
gestures.addOnGesturePerformedListener(this);
Located in onGesturePerformed()
ArrayList<Prediction> predictions = mLibrary.recognize(gesture);
// We want at least one prediction
if (predictions.size() > 0) {
    Prediction prediction = predictions.get(0);
    // We want at least some confidence in the result
    if (prediction.score > 1.0) {
        // Show the spell
        Toast.makeText(this, prediction.name, Toast.LENGTH_SHORT).show();
    }
}
The main problem is that the pre-built gestures are not being recognized correctly. For example, onGesturePerformed() is never called if a horizontal stroke is followed by a vertical one (addition). The method is called if a vertical stroke is followed by a horizontal one. This is how GesturesListDemo behaves too (GesturesDemos.zip @ code.google.com).
Furthermore, the predicted gesture ends up being the wrong gesture: add is recognized as subtract, multiply as divide, subtract as add. It's completely inconsistent.
Finally, the add and subtract gestures typically get similar Prediction scores, which is weird since they differ by an entire stroke.
Sorry about the long post -- wanted to be thorough. Any advice would be greatly appreciated, thanks all.
I realize this is a really old question, but it hasn't been answered yet and this might help someone. I just came across a similar problem and my solution was to use gesture.getStrokesCount() to differentiate between single- and multi-stroke gestures.
Example:
ArrayList<Prediction> predictions = mLibrary.recognize(gesture);
// We want at least one prediction
if (predictions.size() > 0) {
    Prediction prediction = predictions.get(0);
    // We want at least some confidence in the result
    if (prediction.score > 1.0) {
        if (gesture.getStrokesCount() > 1) {
            // This is either add or multiply
        } else {
            // This is either subtract or divide
        }
    }
}
Building on Drew Lederman's answer, you can use this implementation of onGesturePerformed to always get the best result:
public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
    ArrayList<Prediction> predictions = store.recognize(gesture);
    double highScore = 0;
    String gestureName = "";
    for (Prediction prediction : predictions) {
        if (prediction.score > SCORE && prediction.score > highScore &&
                store.getGestures(prediction.name).get(0).getStrokesCount() == gesture.getStrokesCount()) {
            highScore = prediction.score;
            gestureName = prediction.name;
        }
    }
    // Do something with gestureName
}
Yes, multi-stroke gestures are supported, at least from Android 2.3.3, but recognition is heavily imprecise. Check the second example.