take picture when face detected using FaceDetector google-vision - java

I found the demo code here: https://github.com/googlesamples/android-vision/blob/master/visionSamples/FaceTracker/app/src/main/java/com/google/android/gms/samples/vision/face/facetracker/FaceTrackerActivity.java
and my question is: how can I take a picture when a face is detected and save it to the device? After the first picture is taken, the next one should only be taken 5 seconds later (when a face is detected again), because we can't save too many pictures to the device.

You have to set a FaceDetectionListener in the Camera API and then call the startFaceDetection() method:
MyFaceDetectionListener fDListener = new MyFaceDetectionListener();
mCamera.setFaceDetectionListener(fDListener);
mCamera.startFaceDetection();
Implement Camera.FaceDetectionListener; you receive the detected faces in the onFaceDetection override method:
private class MyFaceDetectionListener
        implements Camera.FaceDetectionListener {

    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length == 0) {
            Log.i(TAG, "No faces detected");
        } else {
            Log.i(TAG, "Faces Detected = " + faces.length);
            List<Rect> faceRects = new ArrayList<Rect>();
            for (int i = 0; i < faces.length; i++) {
                int left = faces[i].rect.left;
                int right = faces[i].rect.right;
                int top = faces[i].rect.top;
                int bottom = faces[i].rect.bottom;
                Rect uRect = new Rect(left, top, right, bottom);
                faceRects.add(uRect);
            }
            // add function to draw rects on view/surface/canvas
        }
    }
}
As per your case, use new Handler().postDelayed(runnable, delayMillis) and take the next picture inside the runnable after 5000 ms.
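A minimal sketch of that throttle, assuming an mCamera field and a hypothetical savePictureToDevice() helper; called from onFaceDetection above, it captures on the first detection and ignores further detections for 5 seconds:

private boolean canTakePicture = true;
private final Handler handler = new Handler();

private void onFaceDetected() {
    if (!canTakePicture) return; // still inside the 5-second cool-down
    canTakePicture = false;
    mCamera.takePicture(null, null, new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera camera) {
            savePictureToDevice(data); // hypothetical helper that writes the JPEG to storage
            camera.startPreview();     // the preview stops after takePicture(), so restart it
        }
    });
    handler.postDelayed(new Runnable() {
        @Override
        public void run() {
            canTakePicture = true; // allow the next capture after 5 seconds
        }
    }, 5000);
}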
Please let me know if you have any queries.

Related

GridView: only create Images when they would be visible

Hi there,
So I have the following problem: I have around 1500 images of playing cards, and I want to display them in a "gallery" that you can scroll through. I was able to create a GridView with the ImageGridCell object and to add images to it. My problem is that if I add all the Images at once, Java crashes with a heap error. I have the image URLs (local files) in a list. How could I make it load only, say, 15 images, and when I scroll, load the next 15 and unload the old ones? That way it would only load the actually visible images instead of all 1500. How would I do this? I am completely out of ideas.
The Platform.runLater() is needed because of some sort of issue with ControlsFX.
Here my code:
public void initialize() {
    GridView<Image> gridView = new GridView<>();
    gridView.setCellFactory(gridView1 -> new ImageGridCell(true));
    Image image = new Image("C:\\Users\\nijog\\Downloads\\cardpictures\\01DE001.png");
    gridView.setCellWidth(340);
    gridView.setCellHeight(512);
    //Platform.runLater(() -> {
    //    for (int i = 0; i < 5000; i++) {
    //        gridView.getItems().add(image);
    //    }
    //});
    Platform.runLater(() -> gridView.getItems().addAll(createImageListFromCardFiles()));
    borderPane.setCenter(gridView);
}
protected List<Image> createImageListFromCardFiles() {
    List<Image> imageViewList = new ArrayList<>();
    App.getCardService().getCardArray().stream()
            //.filter(Card::isCollectible)
            .sorted(Comparator.comparingInt(Card::getCost))
            .sorted(Comparator.comparing(Card::isChampion).reversed())
            .skip(0)
            //.limit(100)
            .forEach(card -> {
                try {
                    String url = String.format(App.pictureFolderPath + "%s.png", card.getCardCode());
                    imageViewList.add(new Image(url));
                } catch (Exception e) {
                    System.out.println("Picture file not found [CardCode = " + card.getCardCode() + "]");
                    App.logger.writeLog(Logger.Operation.EXCEPTION, "Picture file not found [CardCode = " + card.getCardCode() + "]");
                }
            });
    return imageViewList;
}
You might not need to use the strategy you describe. You're displaying the images in cells of size 340x512, which is 174,080 pixels. Image storage is 4 bytes per pixel, so this is 696,320 bytes per image; 1500 of them will consume about 1GB. You just need to make sure you load the image at the size you are displaying it (instead of its native size):
// imageViewList.add(new Image(url));
imageViewList.add(new Image(url, 340, 512, true, true, true));
If you need an image at full size later (e.g. if you want the user to select an image from your grid view and display it in a bigger pane), you'd just need to reload it from the url.
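For example, with background loading enabled so the UI thread is not blocked (bigImageView here is a hypothetical ImageView in the detail pane):

// Reload the selected card at its native resolution, loading in the background.
Image fullSize = new Image(url, true);
bigImageView.setImage(fullSize);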
If you do need to implement the strategy you describe, GridView supports that out of the box. Just keep a list of the URLs, instead of the Images, and use a custom GridCell to load the image as needed. This will consume significantly less memory, at the cost of a lot more I/O (loading the images) and CPU (parsing the image format).
Make the items for the GridView the image urls, stored as Strings.
Then you can do something like:
GridView<String> gridView = new GridView<>();
gridView.getItems().addAll(getAllImageURLs());
gridView.setCellFactory(gv -> new GridCell<>() {
    private final ImageView imageView = new ImageView();
    {
        imageView.fitWidthProperty().bind(widthProperty());
        imageView.fitHeightProperty().bind(heightProperty());
        imageView.setPreserveRatio(true);
    }

    @Override
    protected void updateItem(String url, boolean empty) {
        super.updateItem(url, empty);
        if (empty || url == null) {
            setGraphic(null);
        } else {
            double w = getGridView().getCellWidth();
            double h = getGridView().getCellHeight();
            imageView.setImage(new Image(url, w, h, true, true, true));
            setGraphic(imageView);
        }
    }
});
protected List<String> getAllImageURLs() {
    return App.getCardService().getCardArray().stream()
            // isn't the first sort redundant here?
            .sorted(Comparator.comparingInt(Card::getCost))
            .sorted(Comparator.comparing(Card::isChampion).reversed())
            .map(card -> String.format(App.pictureFolderPath + "%s.png", card.getCardCode()))
            .collect(Collectors.toList());
}

Android Face Detection doesn't find faces

I want to find a face in an image from the camera, but the detector can't find faces. My app takes a photo and saves it to a file.
Below is the code that creates the file, starts the camera, and, in onActivityResult, tries to detect a face and save the file path to Room. The photo is saved correctly and shows up in the recycler view as expected, but the face detector doesn't find any faces. How can I fix this?
private fun takePhoto() {
    val takePictureIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
    if (takePictureIntent.resolveActivity(activity?.packageManager!!) != null) {
        val photoFile: File
        try {
            photoFile = createImageFile()
        } catch (e: IOException) {
            error { e }
        }
        val photoURI = FileProvider.getUriForFile(activity?.applicationContext!!, "com.nasibov.fakhri.neurelia.fileprovider", photoFile)
        takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, photoURI)
        takePictureIntent.putExtra("android.intent.extras.CAMERA_FACING", 1)
        startActivityForResult(takePictureIntent, PhotoFragment.REQUEST_TAKE_PHOTO)
    }
}

@Suppress("SimpleDateFormat")
private fun createImageFile(): File {
    val date = SimpleDateFormat("yyyyMMdd_HHmmss").format(Date())
    val fileName = "JPEG_$date"
    val filesDir = activity?.getExternalFilesDir(Environment.DIRECTORY_PICTURES)
    val image = File.createTempFile(fileName, ".jpg", filesDir)
    mCurrentImage = image
    return mCurrentImage
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    if (requestCode == REQUEST_TAKE_PHOTO && resultCode == Activity.RESULT_OK) {
        val bitmap = BitmapFactory.decodeFile(mCurrentImage.absolutePath)
        val frame = Frame.Builder().setBitmap(bitmap).build()
        val detectedFaces = mFaceDetector.detect(frame)
        mViewModel.savePhoto(mCurrentImage)
    }
}
The Android Face Detection API tracks faces in photos and videos using landmarks such as the eyes, nose, ears, cheeks, and mouth.
Rather than detecting individual features, the API detects the face as a whole and then, if configured, detects the landmarks and classifications. The API can also detect faces at various angles.
https://www.journaldev.com/15629/android-face-detection
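For reference, a minimal detection sketch with the Mobile Vision API, assuming a bitmap decoded from the saved file; checking isOperational() matters because the detector's native libraries may not be downloaded yet, in which case detect() silently finds nothing:

FaceDetector detector = new FaceDetector.Builder(context)
        .setTrackingEnabled(false)                   // a single still image, no tracking
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .build();
if (!detector.isOperational()) {
    Log.w(TAG, "Face detector dependencies are not yet available");
}
Frame frame = new Frame.Builder().setBitmap(bitmap).build();
SparseArray<Face> faces = detector.detect(frame);
Log.d(TAG, "Faces found: " + faces.size());
detector.release(); // free the native resources when done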
The Android SDK also contains an older API for face detection: the android.media.FaceDetector class. This class detects faces in a bitmap. To detect faces, call the findFaces method of the FaceDetector class; it returns the number of detected faces and fills the FaceDetector.Face[] array. Note that findFaces only supports bitmaps in RGB_565 format at this time.
Each instance of the FaceDetector.Face class contains the following information:
Confidence that it’s actually a face – a float value between 0 and 1.
Distance between the eyes – in pixels.
Position (x, y) of the mid-point between the eyes.
Rotations (X, Y, Z).
Unfortunately, it doesn’t contain a framing rectangle that includes the detected face.
Here is sample source code for face detection: a custom View that shows a saved image from the SD card and draws transparent red circles on the detected faces.
class Face_Detection_View extends View {
    private static final int MAX_FACES = 10;
    private static final String IMAGE_FN = "face.jpg";
    private Bitmap background_image;
    private FaceDetector.Face[] faces;
    private int face_count;

    // preallocate for onDraw(...)
    private PointF tmp_point = new PointF();
    private Paint tmp_paint = new Paint();

    public Face_Detection_View(Context context) {
        super(context);
        // Load an image from SD Card
        updateImage(Environment.getExternalStorageDirectory() + "/" + IMAGE_FN);
    }

    public void updateImage(String image_fn) {
        // Set internal configuration to RGB_565
        BitmapFactory.Options bitmap_options = new BitmapFactory.Options();
        bitmap_options.inPreferredConfig = Bitmap.Config.RGB_565;
        background_image = BitmapFactory.decodeFile(image_fn, bitmap_options);
        FaceDetector face_detector = new FaceDetector(
                background_image.getWidth(), background_image.getHeight(),
                MAX_FACES);
        faces = new FaceDetector.Face[MAX_FACES];
        // The bitmap must be in 565 format (for now).
        face_count = face_detector.findFaces(background_image, faces);
        Log.d("Face_Detection", "Face Count: " + String.valueOf(face_count));
    }

    @Override
    public void onDraw(Canvas canvas) {
        canvas.drawBitmap(background_image, 0, 0, null);
        for (int i = 0; i < face_count; i++) {
            FaceDetector.Face face = faces[i];
            tmp_paint.setColor(Color.RED);
            tmp_paint.setAlpha(100);
            face.getMidPoint(tmp_point);
            canvas.drawCircle(tmp_point.x, tmp_point.y, face.eyesDistance(),
                    tmp_paint);
        }
    }
}

Processing: Label text slow to update via network events

I'm working on a sketch that is receiving network events from an external program (specifically, an OpenFrameworks sketch), using the processing.net library.
Inside the draw method, I have the following code to parse the incoming data and assign the appropriate text to a label:
void draw()
{
  // check for incoming data
  Client client = server.available();
  if (client != null) {
    // check for a full line of incoming data
    String line = client.readStringUntil('\n');
    if (line != null) {
      //println(line);
      int val = int(trim(line)); // extract the predicted class
      //println(val);
      if (val == 1) {
        messageText = "EVENT 1";
      } else if (val == 2) {
        messageText = "EVENT 2";
      } else if (val == 3) {
        messageText = "EVENT 3";
      }
    }
  }
  // draw
  background(0);
  textFont(f, 64);
  fill(255);
  textAlign(CENTER);
  text(messageText, width/2, height/2);
}
Through logging, I have verified that the data is being received properly.
However, I'm experiencing a very annoying bug: the text of my messageText label is VERY slow to update. After a new event has occurred (and is shown as such through logging), messageText will still display the value of the last event for several seconds.
Anyone have any pointers on how to speed up performance here?
Thanks!
EDIT: Below is the full, complete sketch code:
import processing.net.*; // include the networking library

Server server;      // will receive predictions
String messageText;
PFont f;

void setup()
{
  fullScreen();
  //size(200,200);
  server = new Server(this, 5204); // listen on port 5204
  messageText = "NO HAND";
  f = createFont("Arial", 16, true); // Arial, 16 point, anti-aliasing on
}

void draw()
{
  // check for incoming data
  Client client = server.available();
  if (client != null) {
    // check for a full line of incoming data
    String line = client.readStringUntil('\n');
    if (line != null) {
      //println(line);
      int val = int(trim(line)); // extract the predicted class
      //println(val);
      if (val == 1) {
        messageText = "EVENT 1";
      } else if (val == 2) {
        messageText = "EVENT 2";
      } else if (val == 3) {
        messageText = "EVENT 3";
      }
    }
  }
  // draw
  background(0);
  textFont(f, 64);
  fill(255);
  textAlign(CENTER);
  text(messageText, width/2, height/2);
}
EDIT 2: As Kevin pointed out below, my solution is rather hacky. I'm attempting to use the Message Events methods from the networking library rather than stuffing all my networking code inside the draw() method.
So I tried implementing the clientEvent method as shown below. However, I think I may be misunderstanding something: even though my original, hacky code seems to work OK now, the new code using this delegate method doesn't seem to work at all. Basically, I have to run my sketch first, which creates a server that my external program connects to. That program then sends out data that's received by my Processing sketch.
Here's what my full sketch looks like. Anyone know where my misunderstanding may be coming from?
import processing.net.*; // include the networking library

Server server; // will receive predictions
Client client;
String messageText;
int dataIn;
PFont f;

void setup() {
  fullScreen(P3D);
  frameRate(600);
  server = new Server(this, 5204); // listen on port 5204
  client = server.available();
  messageText = "NO HAND";
  textAlign(CENTER);
  fill(255);
  f = createFont("Arial", 48, true); // Arial, 48 point, anti-aliasing on
  textFont(f, 120);
}

void draw() {
  // draw
  background(0);
  text(messageText, width/2, height/2);
}

// If there is information available to read
// this event will be triggered
void clientEvent(Client client) {
  String msg = client.readStringUntil('\n');
  // The value of msg will be null until the
  // end of the String is reached
  if (msg != null) {
    int val = int(trim(msg)); // extract the predicted class
    println(val);
    if (val == 1) {
      messageText = "A";
    } else if (val == 2) {
      messageText = "B";
    } else if (val == 3) {
      messageText = "C";
    } else if (val == 4) {
      messageText = "D";
    }
  }
}
So, the answer ended up having to do with the frame rate and renderer used in the project. Since the network update code was being called in my sketch's draw method, the speed at which it was called depended on the frame rate and renderer used.
After a bit of experimentation and trial and error, changing the sketch to use the FX2D renderer and a frame rate of 600 significantly improved performance, to the degree I needed.
void setup()
{
  fullScreen(FX2D);
  frameRate(600);
  ...
}
EDIT:
After a conversation with one of the Processing core team members, I'm considering my networking code correct and complete. Changing the renderer to FX2D significantly improved performance.
In my very specific use case, I'm running the sketch full-screen on a MacBookPro with Retina display. Bumping the framerate value high, and changing the renderer, gave me the performance I required for my quick prototype sketches.

Track fast moving fiducial using BoofCV

I am trying to track a person's head with a printed binary fiducial. It tracks fine when the person is moving slowly, but when they move their head quickly, it loses the track and then regains it when they stop moving. What can I do to track the person while they are moving quickly?
For reference, here is a screenshot and code:
camera = UtilWebcamCapture.openDefault(1920, 1080);
intrinsicParameters = new IntrinsicParameters();
intrinsicParameters.setCx(camera.getViewSize().getWidth()/2f);
intrinsicParameters.setCy(camera.getViewSize().getHeight()/2f);
intrinsicParameters.setFx(1);
intrinsicParameters.setFy(1);
intrinsicParameters.setWidth((int)camera.getViewSize().getWidth());
intrinsicParameters.setHeight((int)camera.getViewSize().getHeight());

detector = FactoryFiducial.squareBinary(
        new ConfigFiducialBinary(1),
        ConfigThreshold.local(ThresholdType.LOCAL_SQUARE, 10),
        //ConfigThreshold.fixed(100),
        GrayU8.class);
detector.setIntrinsic(intrinsicParameters);
...
while (true) {
    BufferedImage image = camera.getImage();
    GrayU8 input = ConvertBufferedImage.convertFrom(image, (GrayU8) null);
    WorldToCameraToPixel transform;
    try {
        detector.detect(input);
        Se3_F64 targetToSensor = new Se3_F64();
        for (int i = 0; i < detector.totalFound(); i++) {
            detector.getFiducialToCamera(i, targetToSensor);
            transform = PerspectiveOps.createWorldToPixel(intrinsicParameters, targetToSensor);
            Point2D_F64 centre = transform.transform(new Point3D_F64(0, 0, 0));
            System.out.println(centre);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Thanks!
I solved this issue by creating an object tracker using the initial location of the fiducial, and using that when the user moves quickly.
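A rough sketch of that hybrid approach, assuming BoofCV's generic TrackerObjectQuad (the corner coordinates x0..y3 are placeholders for the fiducial's corners from the last successful detection, and Circulant is only one of several tracker choices):

// Create a generic object tracker; Circulant is one of several options in BoofCV.
TrackerObjectQuad<GrayU8> tracker =
        FactoryTrackerObjectQuad.circulant(null, GrayU8.class);

// Initialize once from the fiducial's last known corner locations.
Quadrilateral_F64 location = new Quadrilateral_F64(x0, y0, x1, y1, x2, y2, x3, y3);
tracker.initialize(input, location);

// Per frame: prefer the fiducial detection when it succeeds,
// and fall back to the tracker during fast motion.
detector.detect(input);
if (detector.totalFound() > 0) {
    // use the fiducial pose, and re-initialize the tracker from its corners
} else if (tracker.process(input, location)) {
    // use the tracked quadrilateral as the approximate head location
}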

Delay on loop without locking the screen

I've been working on an app where I need to get the location of the device. The thing is, I want to try at least 5 or 10 times, with a delay between tries, so that I can show an error message on the screen after each try (FailedLocationMSG). The reason I want this delay is that the GPS sometimes takes a while to start up and get the actual location. The idea is to have a terminal-like interface that displays a message (Error, trying again 1/5... Error 2/5...) after each try. I've tried using Handler and Thread.sleep, but my screen always locks up and I can't see the error message displayed after each try.
This is the method where I get the location:
int breakloop = 0;

private void GetLocation() {
    locationmanager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
    Criteria cri = new Criteria();
    String provider = locationmanager.getBestProvider(cri, false);
    if (provider != null && !provider.equals("")) {   // && so the null check short-circuits
        // Get location
        final Location location = locationmanager.getLastKnownLocation(provider);
        locationmanager.requestLocationUpdates(provider, 2000, 10, this);
        while (breakloop < 10) {
            breakloop++;
            if (location != null)
                onLocationChanged(location);
            else
                FailedLocationMSG(breakloop);
        }
    } else {
        Toast.makeText(getApplicationContext(), "Provider is null", Toast.LENGTH_LONG).show();
    }
}
You could use a FutureTask and a Callable for your location check and wait for its return value:
public class LocationChecker implements Callable<Location> {
    @Override
    public Location call() throws Exception {
        // do your stuff for the location check
        return location;
    }
}
In your method, you then do something like:
LocationChecker lc = new LocationChecker();
FutureTask<Location> ft = new FutureTask<Location>(lc);
ExecutorService es = Executors.newCachedThreadPool();
es.execute(ft);
Location loc = null;
int attempts = 0;
while (attempts < 10) {
    if (ft.isDone() && loc != null) {
        es.shutdown();
        break;
    } else if (ft.isDone() && loc == null) {
        // retry with a fresh task
        ft = new FutureTask<Location>(lc);
        es.execute(ft);
        attempts++;
    }
    try {
        loc = ft.get(); // blocks until the current attempt completes
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
}
This will check your location and wait for the result, retrying up to ten times.
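If the blocking ft.get() still freezes the UI in practice, a non-blocking alternative is to schedule each retry with Handler.postDelayed instead of looping. A minimal sketch, assuming the locationmanager, provider, and FailedLocationMSG pieces from the question:

private int attempts = 0;
private static final int MAX_ATTEMPTS = 5;
private final Handler handler = new Handler(Looper.getMainLooper());

private void tryGetLocation() {
    Location location = locationmanager.getLastKnownLocation(provider);
    if (location != null) {
        onLocationChanged(location); // success: stop retrying
    } else if (++attempts <= MAX_ATTEMPTS) {
        FailedLocationMSG(attempts); // show "Error, trying again n/5" on screen
        handler.postDelayed(new Runnable() {
            @Override
            public void run() {
                tryGetLocation(); // retry after 2 seconds without blocking the UI thread
            }
        }, 2000);
    }
}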
