I want to develop a custom keyboard like Pie Control or iOS's AssistiveTouch; in other words, a floating, round-shaped keyboard. Here's a mock-up written in JavaScript. Right now I'm editing some XML in the SDK SoftKeyboard sample, but this input method occupies the bottom area with a gray screen. I want to remove that and make it transparent.
Can I build this based on the SDK sample?
If I develop it without the SDK sample, which classes do I need to use?
Thanks.
Sorry for the terrible image below!
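One commonly suggested way to get rid of that gray panel is to clear the IME window's background from the SoftKeyboard sample's service. Below is a minimal, untested sketch; the class name is invented, and R.layout.input is the sample's input layout:

```java
import android.graphics.Color;
import android.graphics.drawable.ColorDrawable;
import android.inputmethodservice.InputMethodService;
import android.view.View;

public class FloatingKeyboardService extends InputMethodService {

    @Override
    public View onCreateInputView() {
        // R.layout.input is the SoftKeyboard sample's keyboard layout.
        View inputView = getLayoutInflater().inflate(R.layout.input, null);
        inputView.setBackgroundColor(Color.TRANSPARENT);
        return inputView;
    }

    @Override
    public void onWindowShown() {
        super.onWindowShown();
        // InputMethodService.getWindow() returns the IME's Dialog; clearing
        // its Window background removes the opaque gray panel.
        getWindow().getWindow().setBackgroundDrawable(
                new ColorDrawable(Color.TRANSPARENT));
    }
}
```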
I am using Firebase ML Kit in Android Studio to capture an image and then detect text.
Currently I take the picture with the phone's camera and the image is displayed in the app. I click my detect-text button and the text appears. But I would like to see some bounding boxes on the image that show what Firebase ML Kit is seeing.
I have found many ways to do this in Kotlin, but I am a new developer and I feel I need to figure out how to do this in Java before completely converting my whole codebase to Kotlin.
Thanks
ML Kit provides an API to get the bounding box for recognized text:
https://firebase.google.com/docs/reference/android/com/google/firebase/ml/vision/text/FirebaseVisionText.TextBlock
Also, the quickstart app includes an example of how to do it:
https://github.com/firebase/quickstart-android/blob/master/mlkit/app/src/main/java/com/google/firebase/samples/apps/mlkit/kotlin/textrecognition/TextGraphic.kt#L32
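That example is in Kotlin; a rough Java equivalent might look like the sketch below. It assumes `capturedBitmap` (the photo you took) and `imageView` (where you display it) already exist in your activity; both names are placeholders.

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import android.util.Log;
import android.widget.ImageView;

import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.text.FirebaseVisionText;
import com.google.firebase.ml.vision.text.FirebaseVisionTextRecognizer;

// Called from your detect-text button handler.
private void detectAndDrawBoxes(Bitmap capturedBitmap, ImageView imageView) {
    FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(capturedBitmap);
    FirebaseVisionTextRecognizer detector =
            FirebaseVision.getInstance().getOnDeviceTextRecognizer();

    detector.processImage(image)
            .addOnSuccessListener(result -> {
                // Draw on a mutable copy so the original image stays untouched.
                Bitmap annotated = capturedBitmap.copy(Bitmap.Config.ARGB_8888, true);
                Canvas canvas = new Canvas(annotated);
                Paint boxPaint = new Paint();
                boxPaint.setStyle(Paint.Style.STROKE);
                boxPaint.setStrokeWidth(4f);
                boxPaint.setColor(Color.RED);

                for (FirebaseVisionText.TextBlock block : result.getTextBlocks()) {
                    Rect box = block.getBoundingBox();  // may be null
                    if (box != null) {
                        canvas.drawRect(box, boxPaint);
                    }
                }
                imageView.setImageBitmap(annotated);
            })
            .addOnFailureListener(e -> Log.e("OCR", "Text recognition failed", e));
}
```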
I'm new to Augmented Reality and don't have a compatible device to run the examples provided for ARCore. I have a few questions that I'd like to clear up before going further, since I haven't managed to clarify them through any other means. The app I'm working on is going to perform the following tasks:
Detect a logo from a product
Create a 3D model of it using AR
Display the generated 3D model on the exact same surface
Here is a sample image captured from a box. I want to display the text and logo in the 3D model.
My Questions
Is it possible to display both the logo and text as a 3D model, or does AR support images only?
Should I use ARCore, OpenCV, or something else for the task? Which one is more efficient to implement in terms of time and memory?
Maybe this is a discussion-based question, but I am literally unable to find a solution for it.
Thanks everyone!
If you do not have an ARCore-supported device, you can try Vuforia + Unity instead. Vuforia also supports image recognition and overlaying with AR. Check out this tutorial for your use case.
If you still want to use ARCore, you should check out the Augmented Images feature. The challenge here is whether your logo has a good enough score to work nicely for tracking and overlaying AR.
You can check the image quality/score with this tool.
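To make the Augmented Images route concrete, here is a minimal, untested sketch of the setup in Java; `logoBitmap` and the image name "product_logo" are placeholders:

```java
import android.graphics.Bitmap;
import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;
import com.google.ar.core.Session;

// Registers your logo with an existing ARCore Session so it can be tracked.
void enableLogoTracking(Session session, Bitmap logoBitmap) {
    AugmentedImageDatabase db = new AugmentedImageDatabase(session);
    db.addImage("product_logo", logoBitmap);   // name is arbitrary

    Config config = new Config(session);
    config.setAugmentedImageDatabase(db);
    session.configure(config);

    // Each frame, query tracked images and anchor your 3D model to them:
    // for (AugmentedImage img : frame.getUpdatedTrackables(AugmentedImage.class)) {
    //     if (img.getTrackingState() == TrackingState.TRACKING
    //             && "product_logo".equals(img.getName())) {
    //         Anchor anchor = img.createAnchor(img.getCenterPose());
    //         // attach your renderable / node to this anchor
    //     }
    // }
}
```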
Hi, I have been searching for a solution on how to make something like this possible. I am able to make two separate views and make one grey and the other white, but it doesn't have the same feel. Google uses a kind of shadowy effect on the white one. Is there a library that will allow me to do this?
You're probably talking about elevation; it's a Lollipop feature.
Material Design: Objects in 3D space - Elevation
There are many ways to replicate it on older versions of Android; for example, you could make your own PNG background image.
Get creative with it, it's a lot of fun.
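On Lollipop and above, a minimal sketch might look like this; `whiteCard` is a placeholder for whichever view you want elevated:

```java
import android.util.TypedValue;
import android.view.View;
import androidx.core.view.ViewCompat;

void elevate(View whiteCard) {
    // Convert 4dp to pixels so the shadow scales with screen density.
    float elevationPx = TypedValue.applyDimension(
            TypedValue.COMPLEX_UNIT_DIP, 4f,
            whiteCard.getResources().getDisplayMetrics());
    // Real shadows only render on API 21+; on older devices this is a no-op,
    // which is where the PNG (or CardView) fallback comes in.
    ViewCompat.setElevation(whiteCard, elevationPx);
}
```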
I'm trying to develop an Android app for my own project. I was wondering whether it is possible to build such an app using libGDX, and if not, what I need to learn to code this. (My only experience with Android app development is through libGDX, for some simple games.)
I would like to make an app that displays a story beautifully (think text RPG, but with a poem or short story instead). The user will press a button for the story/poem to advance (think PowerPoint). There will be a simple gradient background (the gradient needs to be able to fade to different colours), and the text should fade in and out as well.
I'll need tight control over the audio. Ideally there will be a voiceover that syncs with when the text fades in. Some parts will have sound effects, and when the user reaches the end of a paragraph, the current BGM will fade out (regardless of whether or not the song is finished); when the user presses continue to the next paragraph, a new, different BGM will fade in.
I would just like to know if it is possible to make this app with libGDX. If it is, I'll continue to learn from tutorials; otherwise I'll learn whatever is necessary to accomplish this.
In a word, yes.
If you have prior libGDX experience, this would be fairly easy to do in libGDX.
Here are references for the features you enumerated (a combined sketch follows the list):
Drawing a gradient in Libgdx
Streaming music
Sound effects
Actions for fade effects
Hope this helps.
Good luck.
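Putting those references together, a minimal sketch of a single screen with a gradient background, fading text, and streaming music might look like this; the asset name, colours, and positions are all placeholders:

```java
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.audio.Music;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.g2d.BitmapFont;
import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.scenes.scene2d.actions.Actions;
import com.badlogic.gdx.scenes.scene2d.ui.Label;

public class StoryApp extends ApplicationAdapter {
    private Stage stage;
    private ShapeRenderer shapes;
    private Music bgm;

    @Override
    public void create() {
        stage = new Stage();
        shapes = new ShapeRenderer();

        // Fading text: scene2d Actions handle the tween for you.
        Label line = new Label("Once upon a time...",
                new Label.LabelStyle(new BitmapFont(), Color.WHITE));
        line.getColor().a = 0f;               // start fully transparent
        line.setPosition(100, 200);
        line.addAction(Actions.fadeIn(1.5f)); // fade in over 1.5 seconds
        stage.addActor(line);

        // Streaming music; "bgm.ogg" is a placeholder asset name.
        bgm = Gdx.audio.newMusic(Gdx.files.internal("bgm.ogg"));
        bgm.setLooping(true);
        bgm.play();
    }

    @Override
    public void render() {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        // Gradient background: one rect with a different colour per corner.
        shapes.begin(ShapeRenderer.ShapeType.Filled);
        shapes.rect(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(),
                Color.NAVY, Color.NAVY, Color.PURPLE, Color.PURPLE);
        shapes.end();

        // For the paragraph-end BGM fade, you would tween the volume here,
        // e.g. bgm.setVolume(Math.max(0f, bgm.getVolume() - delta)).

        stage.act(Gdx.graphics.getDeltaTime());
        stage.draw();
    }

    @Override
    public void dispose() {
        stage.dispose();
        shapes.dispose();
        bgm.dispose();
    }
}
```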
Is it possible to capture the entire screen from Android application code? I'm developing an application like VNC for Android platform.
Regards
I think that depends on what you are trying to capture. I'm sure you can use Moss's method to create a screenshot from your own application - that is, something you render yourself.
As I understand it however, capturing from other views, apps, etc. is designed to be impossible for security reasons. This is to avoid apps being able to take screen shots from other apps, which would make it easy to steal sensitive data.
Yes, it is. You just need to create a canvas and assign it a Bitmap, then draw to that canvas instead of the canvas you use in your onDraw method, and save the bitmap on the SD card, for example.
Just to remind you, this method will only work if you handle the drawing yourself, so you would need a custom home screen for it to capture whatever you want (just rebuild the default Android home screen :D).
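A minimal sketch of that view-to-bitmap approach; note it can only capture views your own app renders, and the file path is a placeholder:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;
import java.io.FileOutputStream;
import java.io.IOException;

public final class ViewCapture {
    private ViewCapture() {}

    // Renders a view you own into a bitmap; it cannot see other apps' windows.
    public static Bitmap capture(View view) {
        Bitmap bitmap = Bitmap.createBitmap(
                view.getWidth(), view.getHeight(), Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmap);
        view.draw(canvas);  // the view draws into our bitmap-backed canvas
        return bitmap;
    }

    // On modern Android you would prefer MediaStore/SAF over a raw path.
    public static void saveAsPng(Bitmap bitmap, String path) throws IOException {
        try (FileOutputStream out = new FileOutputStream(path)) {
            bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
        }
    }
}
```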
I don't have personal experience with it, but this open source project sounds like it might either solve your problem, or provide you clues as to which API to use:
http://sourceforge.net/projects/ashot/
Screen capturing tool for Android handsets connected via USB to a desktop/laptop. It is great for fullscreen presentations, product demos, automatic screen recording, or just a single screenshot. Without root.