I want to be able to adjust an integer value that I have:
Rotating clockwise would increase the value by one
Rotating counter-clockwise would decrease the value by one
I tried searching and found "rotary input", but I didn't really manage to put it into practice (most of what I found was about ScrollView)
How can I achieve it?
For Compose, you can use the onRotaryInputAccumulated modifier from Horologist.
Here is an example for volume control:
https://github.com/google/horologist/blob/94552e13ea45613cc9b804ee7080b4aa92311d54/audio-ui/src/main/java/com/google/android/horologist/audio/ui/VolumeScreen.kt#L108-L114
The raw events come from onRotaryScrollEvent. See also: How to implement PositionIndicator for bezel (Galaxy Watch 4 Classic) Wear OS 3.0 (Jetpack Compose)?
For Views, use https://developer.android.com/training/wearables/user-input/rotary-input
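Horologist's onRotaryInputAccumulated handles the delta accumulation for you; the underlying idea can be sketched in plain Java. The class name and the 48px threshold below are illustrative assumptions, not Horologist's actual API:

```java
// Hypothetical accumulator: converts raw rotary deltas (pixels) into
// integer steps. The threshold is an assumption; tune it per device.
public class RotaryCounter {
    private final float stepThresholdPx;
    private float accumulated = 0f;
    private int value = 0;

    public RotaryCounter(float stepThresholdPx) {
        this.stepThresholdPx = stepThresholdPx;
    }

    public int getValue() { return value; }

    // Feed each raw delta from onRotaryScrollEvent here.
    // Positive deltas (clockwise) increment; negative ones decrement.
    public void onRotaryDelta(float deltaPx) {
        accumulated += deltaPx;
        while (accumulated >= stepThresholdPx) {
            value++;
            accumulated -= stepThresholdPx;
        }
        while (accumulated <= -stepThresholdPx) {
            value--;
            accumulated += stepThresholdPx;
        }
    }
}
```

Accumulating before stepping is what keeps a slow, partial bezel turn from jittering the value; leftover delta carries over to the next event.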
I want to recognize a certain box (like tissue box) using ARCore, ViroCore (or OpenGL) and OpenCV, and display the width, depth, and height of the box.
1. Use OpenCV to detect edges with a Sobel filter.
2. Use OpenCV to recognize the edge-detected box and acquire its coordinates.
3. Use ARCore to calculate the width, depth, and height from the acquired coordinates.
4. Use ARCore and ViroCore (or OpenGL) to display the calculated dimensions.
I cannot imagine how to implement step 2.
Is it possible to recognize the box automatically?
If it is possible, how should it be implemented?
[Development environment]
Android Studio 3.0.1 (not Unity!)
Kotlin(or Java)
Samsung Galaxy S8+
I have a feeling that you didn't do any research. ARCore is not an image recognition tool, so it has nothing to do with that part of your problem. You need to use an image/object recognition library like OpenCV.
As for your questions: yes, it is possible. How to do it? I suggest reading examples; OpenCV ships a large library of ready examples, such as car shape recognition. To recognize a box you can use an edge-tracking algorithm.
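To make the Sobel step concrete, here is a minimal hand-rolled sketch of the gradient-magnitude computation on a grayscale array in plain Java. In a real app you would call OpenCV's Imgproc.Sobel instead; this only illustrates what the filter computes:

```java
// Minimal Sobel gradient magnitude on a grayscale image stored
// row-major in an int[]. Border pixels are left at 0 for brevity.
public class Sobel {
    public static int[] magnitude(int[] pixels, int width, int height) {
        int[] gxK = {-1, 0, 1, -2, 0, 2, -1, 0, 1}; // horizontal kernel
        int[] gyK = {-1, -2, -1, 0, 0, 0, 1, 2, 1}; // vertical kernel
        int[] out = new int[width * height];
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                int gx = 0, gy = 0;
                for (int ky = -1; ky <= 1; ky++) {
                    for (int kx = -1; kx <= 1; kx++) {
                        int p = pixels[(y + ky) * width + (x + kx)];
                        int k = (ky + 1) * 3 + (kx + 1);
                        gx += gxK[k] * p;
                        gy += gyK[k] * p;
                    }
                }
                out[y * width + x] = (int) Math.sqrt(gx * gx + gy * gy);
            }
        }
        return out;
    }
}
```

Pixels where the magnitude is high lie on edges; the contours of the box emerge from thresholding this output, which is what the edge-tracking/contour step then consumes.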
It's not abundantly clear what your intent is, so let me know if this is not what you're looking for. This tutorial on putting bounding boxes around contours contains an example of how to fetch the coordinates of the edges.
I'm developing an app which needs the maximum zoom set to a level way past the upper limit in Google Maps v2 (for indoor navigation). I'm looking for level 22, 23 or even 24.
Two solutions came into my mind:
The max zoom limit in Google Maps can be overwritten.
Use translation and render one small tile as a big square built from nine or more tiles. Tile quality is not an issue for me.
Is either of these approaches possible? Or is there another map engine that supports zooming past level 21? Thank you in advance for your help.
Maybe googleMap.setMaxZoomPreference(value), but I haven't tried it.
You can use the built-in method:
googleMap.getMaxZoomLevel()
Using the above method you will get the maximum zoom level supported at the current camera position, so you don't need to set it manually.
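Note that a max-zoom preference can only lower the effective ceiling, never raise it past what the map itself supports, so neither method gets you beyond roughly level 21. A sketch of that clamping behavior (the class and method here are illustrative, not the Maps SDK internals):

```java
// Illustration of the clamping: a max-zoom preference can only lower
// the effective ceiling, never raise it above what the map supports.
// On a real GoogleMap, getMaxZoomLevel() reports the ceiling for the
// current camera position; here it is just a parameter.
public class ZoomClamp {
    public static float effectiveZoom(float requested, float mapMaxZoom,
                                      Float maxZoomPreference) {
        float ceiling = mapMaxZoom;
        if (maxZoomPreference != null) {
            ceiling = Math.min(ceiling, maxZoomPreference);
        }
        return Math.min(requested, ceiling);
    }
}
```

So requesting level 24 on a map whose own ceiling is 21 still yields 21, which is why zooming past 21 requires a different map engine or custom tile rendering rather than a preference change.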
I am looking to support a range of devices (TVs, tablets, phones).
I have the following directory setup.
res/layout/
res/layout-small/
res/layout-large/
res/layout-xlarge/
I expected layout-xlarge to target only TVs, but it seems to target my 10-inch tablet as well.
Can anyone tell me why this is?
Should I try using the smallest-width directory structure instead,
e.g. res/layout-sw600dp etc.?
The '-small', '-large', '-xlarge' qualifiers are coarse, generalized size buckets: '-xlarge' covers roughly 720dp+ screens, which includes 10-inch tablets as well as TVs.
The correct way to do it, as you said, is to use the smallest-width structure (-sw600dp, as you said).
dp in Android is the unit for density-independent pixels:
What is the difference between "px", "dp", "dip" and "sp" on Android?
Using 'tvdpi' might be a good idea for targeting TVs as well.
Here is a good resource for this information:
http://developer.android.com/guide/practices/screens_support.html
I use the following:
-sw320dp (or default) for smaller phones
-sw360dp for regular phones
-sw480dp for huge phones
-sw600dp for 7" tablets
-sw720dp for 10" tablets
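To make those boundaries concrete, here is a sketch of how smallest-width resolution would pick among the buckets above. This is a simplified model of resource matching, not the framework's actual code:

```java
// Resolve which -swNNNdp bucket from the list above a device falls
// into, given its smallest screen width in dp. Mirrors the platform
// rule: pick the largest qualifier that does not exceed the device.
public class LayoutBucket {
    private static final int[] BUCKETS = {720, 600, 480, 360, 320}; // descending

    public static String forSmallestWidth(int smallestWidthDp) {
        for (int b : BUCKETS) {
            if (smallestWidthDp >= b) {
                return "sw" + b + "dp";
            }
        }
        return "default"; // falls back to the unqualified res/layout/
    }
}
```

For example, a 411dp-wide phone resolves to -sw360dp and a typical 10" tablet (800dp smallest width) resolves to -sw720dp, regardless of pixel density.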
You can also specify pixel densities, such as -ldpi, -mdpi, -hdpi, -xhdpi, and -xxhdpi.
Here's a screenshot from one of my projects, where I attempt to provide the right images for all screen sizes and densities. This may be overkill for what you are trying to do.
For more information, please see http://developer.android.com/guide/practices/screens_support.html
I am working on a photobooth-type app for iPhone and Android. On iPhone, I know the exact resolution of the front camera, so I can always output 4 mini pics predictably and make a photostrip from them.

For Android, I need a way to resize the 4 images I have taken to a width of 48px and a height of 320px per image. That way I can build the same size photostrip I built for the iPhone version and display the photostrips consistently on a website (I don't want their size to vary depending on platform).

On Android, how can I resize to that resolution (48x320) even if the camera doesn't output that aspect ratio? Basically, I'd like the resize to automatically zoom as necessary until 48x320 is reached without looking stretched or distorted. I'm OK with part of the image (like the outside border) being lost in favor of getting a 48x320 image. Maybe this is just a straight Java question...
Thanks so much!
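The "zoom until it fills, then lose the overflow" behavior is a standard center-crop, and the geometry is plain arithmetic. A sketch in Java with the 48x320 target from the question; the class is hypothetical, and on Android you would apply the resulting rectangle via Bitmap.createBitmap and then scale to the target size:

```java
// Compute the source rectangle to crop so that scaling it to
// targetW x targetH fills the target exactly with no distortion.
// Returns {left, top, cropWidth, cropHeight} in source pixels.
public class CenterCrop {
    public static int[] cropRect(int srcW, int srcH, int targetW, int targetH) {
        double targetRatio = (double) targetW / targetH;
        double srcRatio = (double) srcW / srcH;
        if (srcRatio > targetRatio) {
            // Source is wider than the target shape: trim the sides.
            int cropW = (int) (srcH * targetRatio);
            return new int[] {(srcW - cropW) / 2, 0, cropW, srcH};
        } else {
            // Source is taller than the target shape: trim top and bottom.
            int cropH = (int) (srcW / targetRatio);
            return new int[] {0, (srcH - cropH) / 2, srcW, cropH};
        }
    }
}
```

For a 640x480 camera frame and a 48x320 target, this keeps a centered 72x480 strip of the source; scaling that strip down to 48x320 preserves the aspect ratio, so nothing looks stretched.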
Is the power-of-two rule required in AndEngine GLES2, or was it just for GLES1? I know it was recommended to create your BitmapTextureAtlas with width and height values that are powers of two, 512x512 for example. But is this necessary? If yes, why?
My understanding is that it's not required in GLES2. I suspect some low-end devices might not have full or complete implementations, though, so I still do it.
EDIT: "I still do it" as a general rule for production. During testing I have experimented with non-power-of-two sizes, and it works fine on my devices.
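If you do keep padding atlases to powers of two defensively, the usual helper is a next-power-of-two round-up; a minimal sketch:

```java
// Round a texture dimension up to the next power of two; the
// conservative choice for older or partial GLES implementations.
public class Pot {
    public static int nextPowerOfTwo(int n) {
        if (n <= 0) throw new IllegalArgumentException("n must be positive");
        int v = n - 1;          // handle exact powers of two correctly
        v |= v >> 1;            // smear the highest set bit downward...
        v |= v >> 2;
        v |= v >> 4;
        v |= v >> 8;
        v |= v >> 16;
        return v + 1;           // ...then step up to the next power of two
    }
}
```

For example, nextPowerOfTwo(300) returns 512, so a 300x200 atlas would be padded to 512x256; an already-power-of-two size like 512 is returned unchanged.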