Detect faces with ML Kit on Android

You can use ML Kit to detect faces in images and video.

You can choose between an unbundled model, delivered through Google Play services, and a bundled model:

  • Implementation: the unbundled model is dynamically downloaded via Google Play Services; the bundled model is statically linked to your app at build time.
  • App size: the unbundled model adds about 800 KB; the bundled model adds about 6.9 MB.
  • Initialization time: with the unbundled model, you might have to wait for the model to download before first use; the bundled model is available immediately.

Try it out

  • Play around with the sample app to see an example usage of this API.
  • Try the code yourself with the codelab.

Before you begin

  1. In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections.
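     For example, in a Groovy build.gradle (the standard google() repository shortcut, shown here for convenience):

       buildscript {
           repositories {
               google()  // Google's Maven repository
           }
       }

       allprojects {
           repositories {
               google()
           }
       }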

  2. Add the dependencies for the ML Kit Android libraries to your module's app-level Gradle file, which is usually app/build.gradle. Choose one of the following dependencies based on your needs:

    For bundling the model with your app:

      dependencies {
          // ...
          // Use this dependency to bundle the model with your app
          implementation 'com.google.mlkit:face-detection:16.1.7'
      }

    For using the model in Google Play Services:

      dependencies {
          // ...
          // Use this dependency to use the dynamically downloaded model in Google Play Services
          implementation 'com.google.android.gms:play-services-mlkit-face-detection:17.1.0'
      }
  3. If you choose to use the model in Google Play Services, you can configure your app to automatically download the model to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:

     <application ...>
         ...
         <meta-data
             android:name="com.google.mlkit.vision.DEPENDENCIES"
             android:value="face" >
         <!-- To use multiple models: android:value="face,model2,model3" -->
     </application>

    You can also explicitly check the model availability and request download through the Google Play services ModuleInstallClient API.
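    For example, a minimal sketch using the ModuleInstall API (assuming the play-services-base dependency is present in your app):

      val moduleInstallClient = ModuleInstall.getClient(context)
      val optionalModuleApi = FaceDetection.getClient()

      // Check whether the face detection module is already on the device.
      moduleInstallClient
          .areModulesAvailable(optionalModuleApi)
          .addOnSuccessListener { response ->
              if (!response.areModulesAvailable()) {
                  // Request an explicit download of the module.
                  val request = ModuleInstallRequest.newBuilder()
                      .addApi(optionalModuleApi)
                      .build()
                  moduleInstallClient.installModules(request)
              }
          }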

    If you don't enable install-time model downloads or request explicit download, the model is downloaded the first time you run the detector. Requests you make before the download has completed produce no results.

Input image guidelines

For face detection, you should use an image with dimensions of at least 480x360 pixels. For ML Kit to accurately detect faces, input images must contain faces that are represented by sufficient pixel data. In general, each face you want to detect in an image should be at least 100x100 pixels. If you want to detect the contours of faces, ML Kit requires higher resolution input: each face should be at least 200x200 pixels.

If you detect faces in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions. However, keep in mind the accuracy requirements above and ensure that the subject's face occupies as much of the image as possible. Also see tips to improve real-time performance.
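As a rough pre-check of these minimums, a hypothetical helper (not part of ML Kit):

    // Returns true if the bitmap meets the documented 480x360 minimum
    // in either orientation. The per-face size guidance applies separately.
    fun meetsInputGuidelines(bitmap: Bitmap): Boolean =
        minOf(bitmap.width, bitmap.height) >= 360 &&
            maxOf(bitmap.width, bitmap.height) >= 480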

Poor image focus can also impact accuracy. If you don't get acceptable results, ask the user to recapture the image.

The orientation of a face relative to the camera can also affect what facial features ML Kit detects. See Face Detection Concepts.

1. Configure the face detector

Before you apply face detection to an image, if you want to change any of the face detector's default settings, specify those settings with a FaceDetectorOptions object. You can change the following settings:
Settings

  • setPerformanceMode: PERFORMANCE_MODE_FAST (default) | PERFORMANCE_MODE_ACCURATE
    Favor speed or accuracy when detecting faces.

  • setLandmarkMode: LANDMARK_MODE_NONE (default) | LANDMARK_MODE_ALL
    Whether to attempt to identify facial "landmarks": eyes, ears, nose, cheeks, mouth, and so on.

  • setContourMode: CONTOUR_MODE_NONE (default) | CONTOUR_MODE_ALL
    Whether to detect the contours of facial features. Contours are detected for only the most prominent face in an image.

  • setClassificationMode: CLASSIFICATION_MODE_NONE (default) | CLASSIFICATION_MODE_ALL
    Whether or not to classify faces into categories such as "smiling" and "eyes open".

  • setMinFaceSize: float (default: 0.1f)
    Sets the smallest desired face size, expressed as the ratio of the width of the head to the width of the image. For example, the default of 0.1f ignores faces whose head width is less than 10% of the image width.

  • enableTracking: false (default) | true
    Whether or not to assign faces an ID, which can be used to track faces across images.

Note that when contour detection is enabled, only one face is detected, so face tracking doesn't produce useful results. For this reason, and to improve detection speed, don't enable both contour detection and face tracking.

For example:

Kotlin

    // High-accuracy landmark detection and face classification
    val highAccuracyOpts = FaceDetectorOptions.Builder()
            .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
            .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
            .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
            .build()

    // Real-time contour detection
    val realTimeOpts = FaceDetectorOptions.Builder()
            .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
            .build()

Java

    // High-accuracy landmark detection and face classification
    FaceDetectorOptions highAccuracyOpts =
            new FaceDetectorOptions.Builder()
                    .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
                    .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
                    .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
                    .build();

    // Real-time contour detection
    FaceDetectorOptions realTimeOpts =
            new FaceDetectorOptions.Builder()
                    .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
                    .build();

2. Prepare the input image

To detect faces in an image, create an InputImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the InputImage object to the FaceDetector's process method.

For face detection, you should use an image with dimensions of at least 480x360 pixels. If you are detecting faces in real time, capturing frames at this minimum resolution can help reduce latency.

You can create an InputImage object from different sources, each of which is explained below.

Using a media.Image

To create an InputImage object from a media.Image object, such as when you capture an image from a device's camera, pass the media.Image object and the image's rotation to InputImage.fromMediaImage().

If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you.

Kotlin

    private class YourImageAnalyzer : ImageAnalysis.Analyzer {

        override fun analyze(imageProxy: ImageProxy) {
            val mediaImage = imageProxy.image
            if (mediaImage != null) {
                val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
                // Pass image to an ML Kit Vision API
                // ...
            }
        }
    }

Java

    private class YourAnalyzer implements ImageAnalysis.Analyzer {

        @Override
        public void analyze(ImageProxy imageProxy) {
            Image mediaImage = imageProxy.getImage();
            if (mediaImage != null) {
                InputImage image =
                        InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
                // Pass image to an ML Kit Vision API
                // ...
            }
        }
    }

If you don't use a camera library that gives you the image's rotation degree, you can calculate it from the device's rotation degree and the orientation of the camera sensor in the device:

Kotlin

    private val ORIENTATIONS = SparseIntArray()

    init {
        ORIENTATIONS.append(Surface.ROTATION_0, 0)
        ORIENTATIONS.append(Surface.ROTATION_90, 90)
        ORIENTATIONS.append(Surface.ROTATION_180, 180)
        ORIENTATIONS.append(Surface.ROTATION_270, 270)
    }

    /**
     * Get the angle by which an image must be rotated given the device's current
     * orientation.
     */
    @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
    @Throws(CameraAccessException::class)
    private fun getRotationCompensation(cameraId: String, activity: Activity, isFrontFacing: Boolean): Int {
        // Get the device's current rotation relative to its "native" orientation.
        // Then, from the ORIENTATIONS table, look up the angle the image must be
        // rotated to compensate for the device's rotation.
        val deviceRotation = activity.windowManager.defaultDisplay.rotation
        var rotationCompensation = ORIENTATIONS.get(deviceRotation)

        // Get the device's sensor orientation.
        val cameraManager = activity.getSystemService(CAMERA_SERVICE) as CameraManager
        val sensorOrientation = cameraManager
                .getCameraCharacteristics(cameraId)
                .get(CameraCharacteristics.SENSOR_ORIENTATION)!!

        if (isFrontFacing) {
            rotationCompensation = (sensorOrientation + rotationCompensation) % 360
        } else { // back-facing
            rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360
        }
        return rotationCompensation
    }

Java

    private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
    static {
        ORIENTATIONS.append(Surface.ROTATION_0, 0);
        ORIENTATIONS.append(Surface.ROTATION_90, 90);
        ORIENTATIONS.append(Surface.ROTATION_180, 180);
        ORIENTATIONS.append(Surface.ROTATION_270, 270);
    }

    /**
     * Get the angle by which an image must be rotated given the device's current
     * orientation.
     */
    @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
    private int getRotationCompensation(String cameraId, Activity activity, boolean isFrontFacing)
            throws CameraAccessException {
        // Get the device's current rotation relative to its "native" orientation.
        // Then, from the ORIENTATIONS table, look up the angle the image must be
        // rotated to compensate for the device's rotation.
        int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
        int rotationCompensation = ORIENTATIONS.get(deviceRotation);

        // Get the device's sensor orientation.
        CameraManager cameraManager = (CameraManager) activity.getSystemService(CAMERA_SERVICE);
        int sensorOrientation = cameraManager
                .getCameraCharacteristics(cameraId)
                .get(CameraCharacteristics.SENSOR_ORIENTATION);

        if (isFrontFacing) {
            rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
        } else { // back-facing
            rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360;
        }
        return rotationCompensation;
    }

Then, pass the media.Image object and the rotation degree value to InputImage.fromMediaImage():

Kotlin

    val image = InputImage.fromMediaImage(mediaImage, rotation)

Java

    InputImage image = InputImage.fromMediaImage(mediaImage, rotation);

Using a file URI

To create an InputImage object from a file URI, pass the app context and file URI to InputImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

Kotlin

    val image: InputImage
    try {
        image = InputImage.fromFilePath(context, uri)
    } catch (e: IOException) {
        e.printStackTrace()
    }

Java

    InputImage image;
    try {
        image = InputImage.fromFilePath(context, uri);
    } catch (IOException e) {
        e.printStackTrace();
    }
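For reference, a minimal sketch of obtaining such a URI from the user's gallery with the AndroidX Activity Result API; GetContent issues the ACTION_GET_CONTENT intent for you, and registerForActivityResult must be called from an Activity or Fragment (assumed setup):

    val pickImage = registerForActivityResult(ActivityResultContracts.GetContent()) { uri ->
        if (uri != null) {
            val image = InputImage.fromFilePath(context, uri)
            // Pass image to an ML Kit Vision API
        }
    }

    // Launch the system picker filtered to images:
    pickImage.launch("image/*")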

Using a ByteBuffer or ByteArray

To create an InputImage object from a ByteBuffer or a ByteArray, first calculate the image rotation degree as previously described for media.Image input. Then, create the InputImage object with the buffer or array, together with the image's height, width, color encoding format, and rotation degree:

Kotlin

    val image = InputImage.fromByteBuffer(
            byteBuffer,
            /* image width */ 480,
            /* image height */ 360,
            rotationDegrees,
            InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
    )

    // Or:
    val image = InputImage.fromByteArray(
            byteArray,
            /* image width */ 480,
            /* image height */ 360,
            rotationDegrees,
            InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
    )

Java

    InputImage image = InputImage.fromByteBuffer(
            byteBuffer,
            /* image width */ 480,
            /* image height */ 360,
            rotationDegrees,
            InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
    );

    // Or:
    InputImage image = InputImage.fromByteArray(
            byteArray,
            /* image width */ 480,
            /* image height */ 360,
            rotationDegrees,
            InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
    );

Using a Bitmap

To create an InputImage object from a Bitmap object, make the following declaration:

Kotlin

    val image = InputImage.fromBitmap(bitmap, 0)

Java

    InputImage image = InputImage.fromBitmap(bitmap, rotationDegree);

The image is represented by a Bitmap object together with rotation degrees.

3. Get an instance of FaceDetector

Kotlin

    val detector = FaceDetection.getClient(options)
    // Or, to use the default options:
    // val detector = FaceDetection.getClient()

Java

    FaceDetector detector = FaceDetection.getClient(options);
    // Or use the default options:
    // FaceDetector detector = FaceDetection.getClient();

4. Process the image

Pass the image to the process method:

Kotlin

    val result = detector.process(image)
            .addOnSuccessListener { faces ->
                // Task completed successfully
                // ...
            }
            .addOnFailureListener { e ->
                // Task failed with an exception
                // ...
            }

Java

    Task<List<Face>> result =
            detector.process(image)
                    .addOnSuccessListener(
                            new OnSuccessListener<List<Face>>() {
                                @Override
                                public void onSuccess(List<Face> faces) {
                                    // Task completed successfully
                                    // ...
                                }
                            })
                    .addOnFailureListener(
                            new OnFailureListener() {
                                @Override
                                public void onFailure(@NonNull Exception e) {
                                    // Task failed with an exception
                                    // ...
                                }
                            });

5. Get information about detected faces

If the face detection operation succeeds, a list of Face objects is passed to the success listener. Each Face object represents a face that was detected in the image. For each face, you can get its bounding coordinates in the input image, as well as any other information you configured the face detector to find. For example:

Kotlin

    for (face in faces) {
        val bounds = face.boundingBox
        val rotY = face.headEulerAngleY // Head is rotated to the right rotY degrees
        val rotZ = face.headEulerAngleZ // Head is tilted sideways rotZ degrees

        // If landmark detection was enabled (mouth, ears, eyes, cheeks, and
        // nose available):
        val leftEar = face.getLandmark(FaceLandmark.LEFT_EAR)
        leftEar?.let {
            val leftEarPos = leftEar.position
        }

        // If contour detection was enabled:
        val leftEyeContour = face.getContour(FaceContour.LEFT_EYE)?.points
        val upperLipBottomContour = face.getContour(FaceContour.UPPER_LIP_BOTTOM)?.points

        // If classification was enabled:
        if (face.smilingProbability != null) {
            val smileProb = face.smilingProbability
        }
        if (face.rightEyeOpenProbability != null) {
            val rightEyeOpenProb = face.rightEyeOpenProbability
        }

        // If face tracking was enabled:
        if (face.trackingId != null) {
            val id = face.trackingId
        }
    }

Java

    for (Face face : faces) {
        Rect bounds = face.getBoundingBox();
        float rotY = face.getHeadEulerAngleY(); // Head is rotated to the right rotY degrees
        float rotZ = face.getHeadEulerAngleZ(); // Head is tilted sideways rotZ degrees

        // If landmark detection was enabled (mouth, ears, eyes, cheeks, and
        // nose available):
        FaceLandmark leftEar = face.getLandmark(FaceLandmark.LEFT_EAR);
        if (leftEar != null) {
            PointF leftEarPos = leftEar.getPosition();
        }

        // If contour detection was enabled:
        List<PointF> leftEyeContour =
                face.getContour(FaceContour.LEFT_EYE).getPoints();
        List<PointF> upperLipBottomContour =
                face.getContour(FaceContour.UPPER_LIP_BOTTOM).getPoints();

        // If classification was enabled:
        if (face.getSmilingProbability() != null) {
            float smileProb = face.getSmilingProbability();
        }
        if (face.getRightEyeOpenProbability() != null) {
            float rightEyeOpenProb = face.getRightEyeOpenProbability();
        }

        // If face tracking was enabled:
        if (face.getTrackingId() != null) {
            int id = face.getTrackingId();
        }
    }

Example of face contours

When you have face contour detection enabled, you get a list of points for each facial feature that was detected. These points represent the shape of the feature. See Face Detection Concepts for details about how contours are represented.
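For instance, a short Kotlin sketch that walks every contour returned for a face (getAllContours and its accessors are part of the Face API):

    for (contour in face.allContours) {
        val type = contour.faceContourType // e.g. FaceContour.FACE, FaceContour.LEFT_EYE
        val points = contour.points        // List<PointF> tracing the feature's shape
        // Draw or analyze the points...
    }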

The following image illustrates how these points map to a face:

[Image: example of a detected face contour mesh]

Real-time face detection

If you want to use face detection in a real-time application, follow these guidelines to achieve the best framerates:

  • Configure the face detector to use either face contour detection or classification and landmark detection, but not both:

    Contour detection
    Landmark detection
    Classification
    Landmark detection and classification
    Contour detection and landmark detection
    Contour detection and classification
    Contour detection, landmark detection, and classification

  • Enable FAST mode (enabled by default).

  • Consider capturing images at a lower resolution. However, also keep in mind this API's image dimension requirements.

  • If you use the Camera or camera2 API, throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame. See the VisionProcessorBase class in the quickstart sample app for an example.
  • If you use the CameraX API, be sure that the backpressure strategy is set to its default value ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST. This guarantees only one image will be delivered for analysis at a time. If more images are produced while the analyzer is busy, they will be dropped automatically and not queued for delivery. Once the image being analyzed is closed by calling ImageProxy.close(), the next latest image will be delivered. (A combined sketch follows this list.)
  • If you use the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. This renders to the display surface only once for each input frame. See the CameraSourcePreview and GraphicOverlay classes in the quickstart sample app for an example.
  • If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format. If you use the older Camera API, capture images in ImageFormat.NV21 format.
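The following sketch combines several of these tips for CameraX (assumptions: a FaceDetector built elsewhere with real-time options, and the default KEEP_ONLY_LATEST backpressure strategy). Closing the ImageProxy only after ML Kit finishes lets CameraX drop stale frames instead of queuing them:

    private class FaceAnalyzer(private val detector: FaceDetector) : ImageAnalysis.Analyzer {

        @androidx.camera.core.ExperimentalGetImage
        override fun analyze(imageProxy: ImageProxy) {
            val mediaImage = imageProxy.image
            if (mediaImage == null) {
                imageProxy.close()
                return
            }
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            detector.process(image)
                .addOnSuccessListener { faces -> /* overlay or log the results */ }
                .addOnCompleteListener { imageProxy.close() } // release the frame last
        }
    }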