Google Mobile Vision Barcode location on screen - java

I am writing an AR Android application in which I scan QR codes (using the Google Mobile Vision Barcode API) as a trigger and then render objects into the camera view.
As there may be more than one QR code visible at the same time, I want to access the location of each code on the screen, so that an object can be rendered on top of each code. At the moment, if the camera detects two QR codes, both objects are placed at the exact same position in 3D space.
Example:
There are two QR codes detected by the camera, one on the left and one on the right of my camera view. Right now, both objects would be rendered in the center of my camera view, as this is the default value of my app. I want the object associated with the left code to appear on the left and the object associated with the right code to appear on the right.
I have tried calling Barcode.getBoundingBox().exactCenterX() and [...].exactCenterY() to get a rough value of the code's location, but the results do not line up with the code's position on screen. Unfortunately, I can't find a more suitable method in the documentation.
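One thing worth checking: the bounding box is reported in the coordinates of the frame handed to the detector, not in the overlay view's coordinates, so the center has to be scaled into view space (and mirrored for the front-facing camera). Here is a minimal, platform-free sketch of that mapping; all sizes and the class name are made-up assumptions, and on a real device you would read them from the camera preview and the overlay View:

```java
// Maps a barcode's bounding-box center from camera-frame coordinates
// to overlay-view coordinates.
public class BarcodeMapper {
    public static double[] frameCenterToView(
            double centerX, double centerY,   // Barcode.getBoundingBox().exactCenter{X,Y}()
            int frameWidth, int frameHeight,  // size of the frame fed to the detector
            int viewWidth, int viewHeight,    // size of the overlay view
            boolean mirrored) {               // true for the front-facing camera
        double scaleX = (double) viewWidth / frameWidth;
        double scaleY = (double) viewHeight / frameHeight;
        double x = centerX * scaleX;
        if (mirrored) {
            x = viewWidth - x;                // flip horizontally for front camera
        }
        return new double[] { x, centerY * scaleY };
    }

    public static void main(String[] args) {
        // A code centered at (160, 120) in a 640x480 frame maps to
        // (270, 270) in a 1080x1080 view.
        double[] p = frameCenterToView(160, 120, 640, 480, 1080, 1080, false);
        System.out.println(p[0] + "," + p[1]);
    }
}
```

With one such mapping per detected barcode, each rendered object lands over its own code instead of at a shared default position.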
I can't be the only one who needs the code's location on the screen. How did you guys do that?
Thanks a lot.

Related

Is there a Java counterpart for Spritekit's SKActions(moveBy)?

I am working on porting some of my games to Android after publishing them to the iOS App Store. I want to add a little motion to the objects on the screen. In SpriteKit, you can simply use the following to move a node by a certain offset:
node.run(SKAction.move(by: CGVector(dx: x, dy: y), duration: time))
This is very useful for giving motion to things on the screen, and I want to replicate it in Java. Does anyone know how this can be done?
Unfortunately, moving objects is not that easy on Android. iOS provides an easy-to-use framework for this; Android has no comparable native framework.
So how do you do it on Android?
First, let's learn some basics:
iOS has SpriteKit; Android has Canvas.
SpriteKit uses the concept of a Node to display an object on screen.
Canvas draws objects on screen from bitmaps.
The full topic is too big to cover on Stack Overflow, but in general you need to:
Draw the objects on screen at their initial positions.
Set up the game environment so that it runs in a while loop that delivers at least 60 FPS.
Add the move-object logic inside that loop, giving the user the illusion that the object moved by (2, 6) over 10 seconds.
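The per-frame math behind such a move action can be sketched in plain Java. The class below is illustrative only (it is not part of any framework): each frame it advances the position by the fraction of the total vector that the frame time represents.

```java
// A minimal, platform-free sketch of SKAction.move(by:duration:):
// each frame advances the position by (vector / duration) * dt.
public class MoveByAction {
    private final double dx, dy, duration;
    private double elapsed;

    public MoveByAction(double dx, double dy, double duration) {
        this.dx = dx; this.dy = dy; this.duration = duration;
    }

    // Advances the node position; dt is the frame time in seconds
    // (about 1/60 for a 60 FPS loop). Returns true while still running.
    public boolean update(double[] pos, double dt) {
        if (elapsed >= duration) return false;
        double step = Math.min(dt, duration - elapsed); // clamp the last frame
        pos[0] += dx * step / duration;
        pos[1] += dy * step / duration;
        elapsed += step;
        return elapsed < duration;
    }

    public static void main(String[] args) {
        double[] pos = { 0, 0 };
        MoveByAction move = new MoveByAction(2, 6, 10); // move by (2, 6) over 10 s
        while (move.update(pos, 1.0 / 60.0)) { /* one game-loop tick */ }
        System.out.printf("%.3f %.3f%n", pos[0], pos[1]);
    }
}
```

In a real game loop you would call update() once per frame with the measured frame time, then redraw the bitmap at the new position on the Canvas.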
This Pong clone shows the whole approach in code: https://bitbucket.org/jenelleteaches/android2d-pongclone/src/master/MainActivity.java
I would also recommend reading Android Game Programming by Example (published by Packt) before moving into 2D game development on Android.

Android Google Tango Set object position wrt Area Description File

I'm in this situation: I have an ADF with some 3D objects on it (the object positions are saved in a DB). What I'm doing now is loading the ADF, waiting to LOCALIZE, and then placing the 3D objects, but every time I run the app the objects appear in different places, and I noticed this depends on the orientation/pose of the smartphone when it starts. In short: how do I take the device pose/orientation into account and place a 3D object in Rajawali3D (the graphics engine) with respect to that pose?
I had the same problem under Unity and fixed it by checking "Use Area Description" in the Tango Pose Controller prefab. What this does is change the frame of reference Tango uses to localize itself: instead of TANGO_START_OF_SERVICE you need to use TANGO_AREA_DESCRIPTION as the base reference frame.
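The underlying issue can be illustrated without any Tango API, using plain 2D translations (all names and numbers below are made up for the sketch). An object stored at a fixed position in the area-description (ADF) frame stays put across runs only if device poses are expressed with the ADF frame as base; the start-of-service origin is different on every run, so positions relative to it drift.

```java
// Shows why the base frame matters: two runs start the service at
// different physical spots, yet composing through the ADF frame
// yields the same device position, so objects saved in ADF
// coordinates render in the same place.
public class FrameDemo {
    // adfFromStart: where this run's start-of-service origin sits in the ADF frame.
    // startFromDevice: device pose reported relative to start-of-service.
    public static double[] deviceInAdf(double[] adfFromStart, double[] startFromDevice) {
        return new double[] {
            adfFromStart[0] + startFromDevice[0],
            adfFromStart[1] + startFromDevice[1]
        };
    }

    public static void main(String[] args) {
        // Run 1: service started at ADF position (1, 0); device walked (3, 1) from start.
        double[] run1 = deviceInAdf(new double[]{1, 0}, new double[]{3, 1});
        // Run 2: same physical device spot, but service started at (2, 1).
        double[] run2 = deviceInAdf(new double[]{2, 1}, new double[]{2, 0});
        // Both runs agree on the device's ADF position, so an object
        // saved in ADF coordinates keeps a stable offset from the device.
        System.out.println(run1[0] + "," + run1[1]);
        System.out.println(run2[0] + "," + run2[1]);
    }
}
```

In Rajawali3D the equivalent fix is to request poses with the area-description frame as the base frame once the LOCALIZED event fires, rather than the start-of-service frame.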

How to detect the mouse cursor/click in a previously recorded video/image frame

I have some screencasts that explain how to perform a task in a specific software. My objective is to detect where the user clicks on a button or menu item or anything else that leads to a change. One solution to this work is to detect the mouse cursor's location. There are several challenges:
The cursor's icon changes and is not always the same in all videos (e.g. Mac vs Win, arrow shaped and hand shaped cursors).
I tried template matching, but I did not get good results because the display settings of each person who is capturing the video may be different and therefore the size of the cursor will be different.
Calculating the difference between two consecutive frames gives both mouse cursors in the output, but I need only the second (latest) frame's cursor location, not both.
I also looked at object-tracking sample solutions, but they are either for live video or for multiple objects (I only need to spot the mouse cursor, or the locations where the mouse was clicked).
I would appreciate it if anyone could suggest a solution, ready-to-use code (in Java/Matlab/Python), software, or an API for this task.
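For the two-cursor problem with frame differencing, a common trick is three-frame differencing: |curr − prev| marks the cursor at both its old and new positions, but intersecting it with |next − curr| keeps only pixels that differ from both neighbors, i.e. the cursor as it appears in the middle frame. A sketch in plain Java on toy grayscale arrays (real frames would come from decoded video):

```java
// Three-frame differencing: keep only pixels that changed from the
// previous frame AND change again in the next frame, isolating the
// moving cursor in the middle frame.
public class CursorDiff {
    public static boolean[][] middleFrameMask(int[][] prev, int[][] curr, int[][] next, int thresh) {
        int h = curr.length, w = curr[0].length;
        boolean[][] mask = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                boolean d1 = Math.abs(curr[y][x] - prev[y][x]) > thresh;
                boolean d2 = Math.abs(next[y][x] - curr[y][x]) > thresh;
                mask[y][x] = d1 && d2;
            }
        }
        return mask;
    }

    public static void main(String[] args) {
        // A 1x5 "screen": background 0, a bright cursor moving one pixel per frame.
        int[][] prev = {{255, 0, 0, 0, 0}};
        int[][] curr = {{0, 255, 0, 0, 0}};
        int[][] next = {{0, 0, 255, 0, 0}};
        boolean[][] m = middleFrameMask(prev, curr, next, 50);
        for (int x = 0; x < 5; x++) System.out.print(m[0][x] ? 1 : 0);
        System.out.println();
    }
}
```

Applied to a screencast, the mask lights up only at the latest cursor position; it is also insensitive to cursor size and shape, which sidesteps the template-matching problems above.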

Android Capture touches coordinates in whole device

I want to capture pointer locations across the whole Android device by providing a service, to build something like Pointer Location in Developer options.
First I tried adding an overlay view that is always on top to capture the coordinates, but this approach did not let me receive touch events during move actions.
So I decided to try another way. Eventually I found that on rooted devices it is possible to read touch events, but I don't know how. Could you please give some suggestions?
Thanks in advance.
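On rooted devices, one common route is to run getevent through su (e.g. "su -c getevent /dev/input/eventN") and parse its output stream. Each line has the form "<device>: <type> <code> <value>" in hex, where type 0003 (EV_ABS) with code 0035/0036 carries ABS_MT_POSITION_X/Y. A minimal parser sketch (the sample lines and event values are made up; the device path and raw ranges vary per device):

```java
// Parses getevent output lines for multitouch X/Y position events.
public class GeteventParser {
    // Returns {axis, value} where axis is 'X' or 'Y', or null for other events.
    public static int[] parseAbsPosition(String line) {
        String[] parts = line.trim().split("\\s+");
        if (parts.length < 4) return null;
        int type = Integer.parseInt(parts[1], 16);
        int code = Integer.parseInt(parts[2], 16);
        long value = Long.parseLong(parts[3], 16);
        if (type != 0x03) return null;                 // not EV_ABS
        if (code == 0x35) return new int[]{'X', (int) value}; // ABS_MT_POSITION_X
        if (code == 0x36) return new int[]{'Y', (int) value}; // ABS_MT_POSITION_Y
        return null;
    }

    public static void main(String[] args) {
        int[] x = parseAbsPosition("/dev/input/event2: 0003 0035 000001ce");
        int[] y = parseAbsPosition("/dev/input/event2: 0003 0036 00000342");
        if (x != null && y != null) {
            System.out.println("touch at " + x[1] + "," + y[1]);
        }
    }
}
```

Note that the raw values are in the touch panel's own coordinate range, not screen pixels; you would scale them using the axis ranges reported by "getevent -p".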
I have been looking for this answer as well. Can you specify how you did the overlay view and captured the coordinates?
I was able to do the overlay too, following other answers here on Stack Overflow, but if I capture the overlay's coordinates it renders the background/OS unusable. It's one or the other: an actual service running in the background, or an overlay recording coordinates, not both.
I have read that this goes against the Android security model, since it would be a serious flaw if exploited. However, everything I have found is from 2012 or earlier, and this is the newest post in three years. Any insight?

Metaio, showing multiple content on camera view

I'm trying to add multiple views when a QR code is detected. I need to show one button, one image, and one text when the QR code is recognized. I'm using the metaio SDK Template project, and so far I'm only able to show one thing at a time.
Can someone please point me in the right direction?
When you use metaio to detect QR codes, you lose the concept of augmented reality: you are only scanning for a QR code, so you won't be able to attach a 3D model to your target or show and enable view components on the UI. If you have questions about the fundamentals of augmented reality, check the metaio documentation and examples at dev.metaio.com. Regards
