How to recognize a box using ARCore? - java

I want to recognize a certain box (like a tissue box) using ARCore, ViroCore (or OpenGL), and OpenCV, and display the width, depth, and height of the box. My plan is:
1. Use OpenCV to detect edges with a Sobel filter.
2. Use OpenCV to recognize the edge-detected box and acquire its coordinates.
3. Use ARCore to calculate the width, depth, and height from the acquired coordinates.
4. Use ARCore and ViroCore (or OpenGL) to display the calculated dimensions.
I cannot imagine how to implement step 2.
Is it possible to recognize the box automatically?
If it is possible, how should it be implemented?
[Development environment]
Android Studio 3.0.1 (not Unity!)
Kotlin (or Java)
Samsung Galaxy S8+

I have a feeling that you didn't do any research. ARCore is not an image recognition tool, so it has nothing to do with that part of your problem. You need to use an image/object recognition tool like OpenCV.
About your questions: yes, it is possible. How to do it? I suggest reading the examples; OpenCV has a big library of ready-made examples, like car shape recognition. To recognize a box you can use an edge tracking algorithm.
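For the edge detection step, a minimal sketch using OpenCV's Java bindings might look like this (the file names are placeholders, and it assumes the OpenCV native library is on your java.library.path):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class SobelEdges {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat src = Imgcodecs.imread("box.jpg", Imgcodecs.IMREAD_GRAYSCALE);
        Mat gradX = new Mat(), gradY = new Mat(), edges = new Mat();
        // Horizontal and vertical gradients
        Imgproc.Sobel(src, gradX, CvType.CV_16S, 1, 0);
        Imgproc.Sobel(src, gradY, CvType.CV_16S, 0, 1);
        // Convert back to 8-bit and blend the two gradients into one edge map
        Core.convertScaleAbs(gradX, gradX);
        Core.convertScaleAbs(gradY, gradY);
        Core.addWeighted(gradX, 0.5, gradY, 0.5, 0, edges);
        Imgcodecs.imwrite("edges.jpg", edges);
    }
}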

It's not abundantly clear what your intent is, so let me know if this is not what you're looking for. This tutorial on putting bounding boxes around contours seems to contain an example of how to fetch the coordinates of the edges.
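As a rough illustration of that idea, here is a sketch with OpenCV's Java bindings (the threshold value and file name are placeholders you would tune):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class BoxContours {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat gray = Imgcodecs.imread("edges.jpg", Imgcodecs.IMREAD_GRAYSCALE);
        Mat binary = new Mat();
        Imgproc.threshold(gray, binary, 100, 255, Imgproc.THRESH_BINARY);
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(binary, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        for (MatOfPoint contour : contours) {
            // The bounding rectangle gives the screen coordinates of each candidate box
            Rect box = Imgproc.boundingRect(contour);
            System.out.println("Candidate box: " + box);
        }
    }
}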

Related

How to make graphic overlay elements responsive in a live camera preview?

Currently I'm developing a project that implements object detection/tracking, but now I need to add new functionality: when the user wants to choose an element detected in the live camera preview, it should be possible to select it by touching the reticle element in the center of its bounding box.
This is how my app currently looks.
At the moment I am able to draw a customized bounding box and a customized reticle that fits in the center of the bounding box, and I am also able to show as many bounding boxes as needed with multiple object detection. So far that's what I can do. In a 2019 ML Kit presentation video they use another method with the live camera preview: their pipeline holds the camera still for a couple of seconds to trigger a search. I want to do something similar, but instead of holding the camera and waiting a couple of seconds, I want to give the app user the chance to choose the element.
I use the ML Kit Vision Quickstart Sample App and the ML Kit Showcase App with Material Design as the base of my project, and I was able to successfully implement different pieces of both projects.
I currently use Kotlin but I can also work with Java code, so no problem. Also, if anyone thinks it's necessary, I can add code.
If you're also using a single custom view to draw all these bounding boxes and reticles like the MLKit samples do (e.g. GraphicOverlay in the ML Kit Showcase App), then you can override its #onTouchEvent method to simulate a click event on the reticle element (the touch point is inside the reticle area) and do whatever you want after it's triggered.
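A rough sketch of that idea in Java (the class name, the reticleBoxes field, and the onReticleTapped callback are placeholders; you would fold this into your existing overlay view):

import android.content.Context;
import android.graphics.RectF;
import android.view.MotionEvent;
import android.view.View;
import java.util.ArrayList;
import java.util.List;

public class TappableOverlay extends View {
    // Assumed: you already populate this with the reticle rect of each detection
    private final List<RectF> reticleBoxes = new ArrayList<>();

    public TappableOverlay(Context context) { super(context); }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_UP) {
            for (RectF reticle : reticleBoxes) {
                // Is the touch point inside this reticle's area?
                if (reticle.contains(event.getX(), event.getY())) {
                    performClick();
                    onReticleTapped(reticle);
                    return true;
                }
            }
        }
        return super.onTouchEvent(event);
    }

    @Override
    public boolean performClick() { return super.performClick(); }

    // Placeholder: react to the selected detection (e.g. start your search flow)
    private void onReticleTapped(RectF reticle) { }
}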

Aligning Kinect V2 RGB picture and depth map using java

I would like to know if there is a way to align the RGB picture and the depth data of a Kinect V2, using the colour data as a starting point, in Java. I am currently using Java for Kinect as a wrapper, and it does not seem to offer that possibility. Is there any way to do that?
Finally got around it by using Spektre's answer here; I had to play around with the formulas to make it work, but it seems fine to me.
Rectified for my needs, it gives:
int alignx= (((x-512)<<8)/241)+Width;
int aligny= (((y-424)<<8)/240)+25+Height;
It works fine as long as the Kinect is at the same level as the object you want to target (i.e. no pitch used).
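For readability, here is a small wrapper around those two formulas (my assumption: x/y are depth-frame coordinates, and Width/Height are per-setup offsets you tune by hand):

public final class DepthToColor {
    private DepthToColor() {}

    // Map a depth-frame x coordinate to the RGB frame, per the formula above
    public static int alignX(int x, int widthOffset) {
        return (((x - 512) << 8) / 241) + widthOffset;
    }

    // Map a depth-frame y coordinate to the RGB frame, per the formula above
    public static int alignY(int y, int heightOffset) {
        return (((y - 424) << 8) / 240) + 25 + heightOffset;
    }
}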
I don't quite agree with Alex Acquier's answer; I feel it is not the right approach. I have faced the same issue, and I know I am answering 8 months late, but in the interest of others who come here searching for the solution, I present it here:
The thing is, you don't have to align the RGB and the depth frames manually. There is already a class that can do that for you, IMultiSourceFrameReader. Using this as a source, you can be sure you are building the point cloud in the right way.
Now this is fine if you just want to use the feeds. But if, somewhere down your code, you are going to use some kind of coordinate system and you will need the coordinates of the RGB and the depth pixels, then you would expect them to be the same, right? Because you are using the aligned coordinates, after all. But you won't get aligned coordinates until you use the ICoordinateMapper class. It makes the coordinates from the different sensors, RGB and infrared, align as well, and returns the aligned coordinates.
Please refer to this source; it has been my go-to reference for Kinect V2 for a long time.

Bitmap normalization in Android for solving illumination

I have developed an Android app for color comparison, and I have successfully completed it except for one problem: illumination. My reference chart is stored on the SD card as JPEG images, and I need to compare those images with the images I take with the camera. I am getting output, but it depends on the illumination, so now I am planning to normalize the bitmaps. How do I normalize a bitmap? I am comparing images using a naive similarity method.
Please also suggest a good way to solve the illumination problem; I have been searching for methods for the last two weeks.
Try the Catalano Framework. Here is the article where you can learn how to use it.
I implemented image normalization using this approach:
// Catalano Framework: normalize the image's intensity distribution in place
import Catalano.Imaging.FastBitmap;
import Catalano.Imaging.Filters.ImageNormalization;

FastBitmap fb = new FastBitmap("image.jpg");
fb.toGrayscale();                 // the filter expects a grayscale image
ImageNormalization in = new ImageNormalization();
in.applyInPlace(fb);
Here is a link to a similar question about luminosity, so I won't repeat everything.
Color is perceptual, not absolute. Take a red car and park it under a street light, and the car will appear to be orange. Obviously the color of the paint has not changed; only your perception of the color has changed.
Any time you photograph a color, the light used to illuminate the scene will change the results you get. Most cameras have a white balance control, and most people use auto. The auto control looks for something to call white, then shifts the image to make that white look white. This does not mean the rest of the colors will be correct.
Take something like a colorful stuffed animal (or a few) and photograph it outside in sunlight, under an incandescent bulb, under fluorescent light, and in a dark room with the camera flash. How similar are the colors? If you have Photoshop, look at the color curves.
To match color, you need an objective standard, such as a color card included in the photo. The software then brightness- and color-corrects the image based on the known card, and measures the other colors against the known standards.
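A minimal sketch of that correction on Android, assuming you know where the card's white patch sits in the bitmap (the patch coordinates are placeholders): scale each channel so the patch averages to pure white, then compare colors in the corrected image.

import android.graphics.Bitmap;
import android.graphics.Color;

public class WhiteCardCorrection {
    public static Bitmap correct(Bitmap img, int patchX, int patchY, int patchW, int patchH) {
        // Average the known-white patch
        long r = 0, g = 0, b = 0, n = 0;
        for (int y = patchY; y < patchY + patchH; y++) {
            for (int x = patchX; x < patchX + patchW; x++) {
                int c = img.getPixel(x, y);
                r += Color.red(c); g += Color.green(c); b += Color.blue(c); n++;
            }
        }
        // Per-channel scale factors that map the patch average to pure white
        double rs = 255.0 / Math.max(1, r / n);
        double gs = 255.0 / Math.max(1, g / n);
        double bs = 255.0 / Math.max(1, b / n);
        Bitmap out = Bitmap.createBitmap(img.getWidth(), img.getHeight(), Bitmap.Config.ARGB_8888);
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int c = img.getPixel(x, y);
                int nr = (int) Math.min(255, Color.red(c) * rs);
                int ng = (int) Math.min(255, Color.green(c) * gs);
                int nb = (int) Math.min(255, Color.blue(c) * bs);
                out.setPixel(x, y, Color.rgb(nr, ng, nb));
            }
        }
        return out;
    }
}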

Find Specific Color in an Image

I have an image in which I want to find a specific location (coordinate) based on its color. For example,
I want to find the coordinates of the edges of this black box.
How can I detect the black color in this image in Java?
Note: my goal is to develop a program to detect eyes in a face.
I would suggest using a threshold filter and then converting the image to 1-bit format. This should do the trick.
However, locating eyes in an image is much harder. You might be interested in the open source OpenCV library. There is a port dedicated to Java, javacv, and a C++ example of face detection using OpenCV.
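For the simple black-box part, here is a sketch in plain Java (the file name and threshold are placeholders): threshold each pixel by brightness and track the extremes of the dark region, which gives you the corner coordinates.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class FindBlackBox {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("image.png"));
        int threshold = 40; // tune: how dark a pixel must be to count as "black"
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE, maxX = -1, maxY = -1;
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int lum = (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 3;
                if (lum < threshold) { // dark enough: part of the black box
                    minX = Math.min(minX, x); minY = Math.min(minY, y);
                    maxX = Math.max(maxX, x); maxY = Math.max(maxY, y);
                }
            }
        }
        if (maxX >= 0) {
            System.out.println("Black region: (" + minX + "," + minY + ") to (" + maxX + "," + maxY + ")");
        }
    }
}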
As far as I know, the Fourier transform is used in image processing. With it you get your picture in the frequency domain, which represents a signal (in the case of an image, the signal is two-dimensional). You can use the Fast Fourier Transform algorithm (FFT in Java, Fun with Java, Understanding FFT). There are a lot of papers about eye detection that you can read and take inspiration from:
http://www.jprr.org/index.php/jprr/article/viewFile/15/7
http://people.ee.ethz.ch/~bfasel/papers/avbpa_face.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.2226&rep=rep1&type=pdf

Object detection with a generic webcam

Here's my task, which I want to solve with as little effort as possible (preferably with Qt & C++ or Java): I want to use webcam video input to detect whether there is a crate (or more than one) in front of the camera lens. The scene can change from "clear" to "there is a crate in front of the lens" and back while the cam feeds its video signal to my application. For prototype testing/learning I have 2-3 images of the "empty" scene, and 2-3 images with one or more crates.
Do you know a straightforward way to tackle this task? I found OpenCV, but isn't this framework too bulky for such a simple task? I'm new to the field of computer vision. Is this generally a hard task, or is it simple and robust to detect whether there's an obstacle in front of the cam in live feeds? Your expert opinion is deeply appreciated!
Here's an approach I've heard of, which may yield some success:
Perform edge detection on your image to translate it into a black and white image, whereby edges are shown as black pixels.
Now create a histogram to record the frequency of black pixels in each vertical column of pixels in the image. The theory here is that a high value in or around one bucket of the histogram is indicative of a vertical edge, which could be the edge of a crate.
You could also consider a second histogram to measure pixels on each row of the image.
Obviously this is a fairly simple approach and is highly dependent on "simple" input, i.e. plain boxes with "hard" edges against a blank background (preferably a background that contrasts heavily with the box).
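A minimal sketch of that column histogram in Java, assuming you already have a black-and-white edge image where dark pixels mark edges:

import java.awt.image.BufferedImage;

public class EdgeHistogram {
    public static int[] columnHistogram(BufferedImage edges) {
        int[] hist = new int[edges.getWidth()];
        for (int x = 0; x < edges.getWidth(); x++) {
            for (int y = 0; y < edges.getHeight(); y++) {
                // Count near-black (edge) pixels in this column
                if ((edges.getRGB(x, y) & 0xFF) < 50) {
                    hist[x]++;
                }
            }
        }
        return hist; // spikes in this array suggest vertical edges, e.g. a crate side
    }
}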
You don't need a full-blown computer vision library to detect whether there is a crate in front of the camera. You can just take a snapshot and make a color histogram (simple). To capture the snapshot, take a look here:
http://msdn.microsoft.com/en-us/library/dd742882%28VS.85%29.aspx
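To make the histogram idea concrete, here is a sketch in Java (the bucket size is an arbitrary choice): build a coarse color histogram for the reference snapshot and the current one, and treat a large distance as "crate present".

import java.awt.image.BufferedImage;

public class HistogramCompare {
    // Build a coarse color histogram with 8 buckets per channel (512 total)
    public static int[] histogram(BufferedImage img) {
        int[] hist = new int[8 * 8 * 8];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = ((rgb >> 16) & 0xFF) >> 5;
                int g = ((rgb >> 8) & 0xFF) >> 5;
                int b = (rgb & 0xFF) >> 5;
                hist[(r << 6) | (g << 3) | b]++;
            }
        }
        return hist;
    }

    // Sum of absolute differences: a large value means the scene changed
    public static long distance(int[] a, int[] b) {
        long d = 0;
        for (int i = 0; i < a.length; i++) d += Math.abs(a[i] - b[i]);
        return d;
    }
}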
Lots of variables here, including possible changes in ambient lighting and any other activity in the field of view. Look at implementing a Canny edge detector (which OpenCV has, and Intel Performance Primitives do as well) to look for the outline of the shape of interest. If you roughly know where the box will be, you can sum pixels in that region of interest. If the box can appear anywhere in the field of view, this is more challenging.
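For example, with OpenCV's Java bindings (the thresholds and the region of interest are placeholders to tune):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class CannyRoi {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat gray = Imgcodecs.imread("frame.jpg", Imgcodecs.IMREAD_GRAYSCALE);
        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 50, 150); // thresholds need tuning per scene
        // If you roughly know where the box appears, count edge pixels in that region
        Rect roi = new Rect(100, 100, 200, 200);
        int edgePixels = Core.countNonZero(edges.submat(roi));
        System.out.println("Edge pixels in ROI: " + edgePixels);
    }
}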
This is not something you should start in Java. When I had this kind of problem, I would start with Matlab (with the OpenCV library) or something similar, see if the solution works there, and then port it to Java.
To answer your question: I did something similar by XOR-ing the 'reference' image (no crate, in your case) with the current image, then either working on the histogram (clustered pixels at the right mean a large difference) or just summing the visible pixels and comparing them with a threshold. XOR is not really precise, but it is fast.
My point is, it took me 2 hours to install Scilab and the toolkits and write a proof of concept. It would have taken me two days in Java, and if the first solution didn't work, each additional algorithm (already done in Mat-/Scilab) would take another few hours. IMHO you are approaching the problem from the wrong angle.
If Java/C++ really are just tools that don't matter, then drop them and use Scilab or some other Matlab clone; prototyping and fine tuning will be much faster.
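A rough Java sketch of the XOR approach described above (the per-channel tolerance and the count threshold are placeholders you would calibrate against your sample images):

import java.awt.image.BufferedImage;

public class XorDiff {
    // XOR the reference frame against the current frame and count differing pixels.
    // As noted above, XOR is a rough difference measure, but it is fast.
    public static boolean crateDetected(BufferedImage reference, BufferedImage current,
                                        int pixelTolerance, long countThreshold) {
        long changed = 0;
        for (int y = 0; y < reference.getHeight(); y++) {
            for (int x = 0; x < reference.getWidth(); x++) {
                int diff = reference.getRGB(x, y) ^ current.getRGB(x, y);
                // A nonzero channel byte in the XOR result indicates a difference
                if (((diff >> 16) & 0xFF) > pixelTolerance
                        || ((diff >> 8) & 0xFF) > pixelTolerance
                        || (diff & 0xFF) > pixelTolerance) {
                    changed++;
                }
            }
        }
        return changed > countThreshold;
    }
}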
There are two parts involved in object detection. One is feature extraction, the other is similarity calculation. Some obvious features of the crate are its geometry, edges, texture, etc.
So you can find algorithms to extract these features from your crate image, then compare them with the features of your training sample images.
