Detect poses with ML Kit on Android

ML Kit provides two optimized SDKs for pose detection.

| SDK Name | pose-detection | pose-detection-accurate |
|---|---|---|
| Implementation | Code and assets are statically linked to your app at build time. | Code and assets are statically linked to your app at build time. |
| App size impact (including code and assets) | ~10.1 MB | ~13.3 MB |
| Performance | Pixel 3 XL: ~30 FPS | Pixel 3 XL: ~23 FPS with CPU, ~30 FPS with GPU |

Try it out

  • Play around with the sample app to see an example usage of this API.

Before you begin

  1. In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections.
  2. Add the dependencies for the ML Kit Android libraries to your module's app-level gradle file, which is usually app/build.gradle:

      dependencies {
          // If you want to use the base sdk
          implementation 'com.google.mlkit:pose-detection:18.0.0-beta5'
          // If you want to use the accurate sdk
          implementation 'com.google.mlkit:pose-detection-accurate:18.0.0-beta5'
      }
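
     If your module uses the Gradle Kotlin DSL instead, the equivalent declarations (a sketch of app/build.gradle.kts using the same artifact coordinates) would be:

      dependencies {
          // If you want to use the base sdk
          implementation("com.google.mlkit:pose-detection:18.0.0-beta5")
          // If you want to use the accurate sdk
          implementation("com.google.mlkit:pose-detection-accurate:18.0.0-beta5")
      }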

1. Create an instance of PoseDetector

PoseDetector options

To detect a pose in an image, first create an instance of PoseDetector and optionally specify the detector settings.

Detection mode

The PoseDetector operates in two detection modes. Be sure you choose the one that matches your use case.

STREAM_MODE (default)
The pose detector will first detect the most prominent person in the image and then run pose detection. In subsequent frames, the person-detection step will not be conducted unless the person becomes obscured or is no longer detected with high confidence. The pose detector will attempt to track the most-prominent person and return their pose in each inference. This reduces latency and smooths detection. Use this mode when you want to detect pose in a video stream.
SINGLE_IMAGE_MODE
The pose detector will detect a person and then run pose detection. The person-detection step will run for every image, so latency will be higher, and there is no person-tracking. Use this mode when using pose detection on static images or where tracking is not desired.

Hardware config

The PoseDetector supports multiple hardware configurations for optimizing performance:

  • CPU: run the detector by using CPU only
  • CPU_GPU: run the detector by using both CPU and GPU

When building the detector options, you can use the setPreferredHardwareConfigs API to control the hardware selection. By default, all hardware configurations are set as preferred.

ML Kit will take the availability, stability, correctness, and latency of each config into consideration and pick the best one from the preferred configs. If none of the preferred configs is applicable, the CPU config will be used automatically as a fallback. ML Kit performs these checks and the related preparation in a non-blocking way before enabling any acceleration, so the first time your user runs the detector it will most likely use CPU. After all the preparation finishes, the best config will be used in the following runs.

Example usages of setPreferredHardwareConfigs:

  • To let ML Kit pick the best config, do not call this API.
  • If you don't want to enable any acceleration, pass in only CPU.
  • If you want to use the GPU to offload work from the CPU, even if the GPU could be slower, pass in only CPU_GPU.
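
For example, a minimal Kotlin sketch of the last case, assuming the CPU_GPU constant is exposed on PoseDetectorOptions as described above:

    // Prefer GPU-assisted inference; ML Kit still falls back to CPU if the
    // GPU config turns out not to be applicable on this device.
    val options = PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .setPreferredHardwareConfigs(PoseDetectorOptions.CPU_GPU)
        .build()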

Specify the pose detector options:

Kotlin

// Base pose detector with streaming frames, when depending on the pose-detection sdk
val options = PoseDetectorOptions.Builder()
    .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
    .build()

// Accurate pose detector on static images, when depending on the pose-detection-accurate sdk
val options = AccuratePoseDetectorOptions.Builder()
    .setDetectorMode(AccuratePoseDetectorOptions.SINGLE_IMAGE_MODE)
    .build()

Java

// Base pose detector with streaming frames, when depending on the pose-detection sdk
PoseDetectorOptions options =
        new PoseDetectorOptions.Builder()
                .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
                .build();

// Accurate pose detector on static images, when depending on the pose-detection-accurate sdk
AccuratePoseDetectorOptions options =
        new AccuratePoseDetectorOptions.Builder()
                .setDetectorMode(AccuratePoseDetectorOptions.SINGLE_IMAGE_MODE)
                .build();

Finally, create an instance of PoseDetector. Pass the options you specified:

Kotlin

val poseDetector = PoseDetection.getClient(options)

Java

PoseDetector poseDetector = PoseDetection.getClient(options);
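
The detector holds resources for the underlying models. When you no longer need it (for example, when the hosting Activity is destroyed), release it; a minimal Kotlin sketch, relying on PoseDetector implementing Closeable:

    override fun onDestroy() {
        super.onDestroy()
        // Frees the resources held by the detector's underlying models.
        poseDetector.close()
    }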

2. Prepare the input image

To detect poses in an image, create an InputImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the InputImage object to the PoseDetector.

For pose detection, you should use an image with dimensions of at least 480x360 pixels. If you are detecting poses in real time, capturing frames at this minimum resolution can help reduce latency.

You can create an InputImage object from different sources, each of which is explained below.

Using a media.Image

To create an InputImage object from a media.Image object, such as when you capture an image from a device's camera, pass the media.Image object and the image's rotation to InputImage.fromMediaImage().

If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you.

Kotlin

private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

Java

private class YourAnalyzer implements ImageAnalysis.Analyzer {

    @Override
    public void analyze(ImageProxy imageProxy) {
        Image mediaImage = imageProxy.getImage();
        if (mediaImage != null) {
            InputImage image =
                    InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

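To actually receive frames, bind the analyzer to a CameraX ImageAnalysis use case. A minimal sketch, assuming the standard androidx.camera artifacts and a cameraProvider, cameraSelector, context, and lifecycleOwner you have already set up:

    val imageAnalysis = ImageAnalysis.Builder()
        // Deliver only the latest frame; stale frames are dropped automatically.
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()
        .also {
            it.setAnalyzer(ContextCompat.getMainExecutor(context), YourImageAnalyzer())
        }
    cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, imageAnalysis)
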
If you don't use a camera library that gives you the image's rotation degree, you can calculate it from the device's rotation degree and the orientation of the camera sensor in the device:

Kotlin

private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 0)
    ORIENTATIONS.append(Surface.ROTATION_90, 90)
    ORIENTATIONS.append(Surface.ROTATION_180, 180)
    ORIENTATIONS.append(Surface.ROTATION_270, 270)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, isFrontFacing: Boolean): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // Get the device's sensor orientation.
    val cameraManager = activity.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
        .getCameraCharacteristics(cameraId)
        .get(CameraCharacteristics.SENSOR_ORIENTATION)!!

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360
    }
    return rotationCompensation
}
  

Java

private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 0);
    ORIENTATIONS.append(Surface.ROTATION_90, 90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, boolean isFrontFacing)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // Get the device's sensor orientation.
    CameraManager cameraManager = (CameraManager) activity.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360;
    }
    return rotationCompensation;
}

Then, pass the media.Image object and the rotation degree value to InputImage.fromMediaImage():

Kotlin

val image = InputImage.fromMediaImage(mediaImage, rotation)
  

Java

InputImage image = InputImage.fromMediaImage(mediaImage, rotation);

Using a file URI

To create an InputImage object from a file URI, pass the app context and file URI to InputImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

Kotlin

val image: InputImage
try {
    image = InputImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}

Java

InputImage image;
try {
    image = InputImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}

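As a hypothetical illustration of obtaining such a URI (PICK_IMAGE_REQUEST is an arbitrary request code, not an ML Kit API):

    // Ask the user to pick an image; the selected Uri arrives in
    // onActivityResult() and can be passed to InputImage.fromFilePath().
    val intent = Intent(Intent.ACTION_GET_CONTENT).apply { type = "image/*" }
    startActivityForResult(intent, PICK_IMAGE_REQUEST)
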
Using a ByteBuffer or ByteArray

To create an InputImage object from a ByteBuffer or a ByteArray, first calculate the image rotation degree as previously described for media.Image input. Then, create the InputImage object with the buffer or array, together with the image's height, width, color encoding format, and rotation degree:

Kotlin

val image = InputImage.fromByteBuffer(
    byteBuffer,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

// Or:
val image = InputImage.fromByteArray(
    byteArray,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

Java

InputImage image = InputImage.fromByteBuffer(
        byteBuffer,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

// Or:
InputImage image = InputImage.fromByteArray(
        byteArray,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

Using a Bitmap

To create an InputImage object from a Bitmap object, make the following declaration:

Kotlin

val image = InputImage.fromBitmap(bitmap, 0)

Java

InputImage image = InputImage.fromBitmap(bitmap, rotationDegree);

The image is represented by a Bitmap object together with rotation degrees.
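
If the Bitmap was decoded from a file, its rotation is often recorded in EXIF metadata rather than baked into the pixels. A sketch of recovering it, assuming the androidx.exifinterface library and a hypothetical photoFile:

    val exif = ExifInterface(photoFile.path)
    // Map the EXIF orientation tag to the rotation degrees ML Kit expects.
    val rotationDegrees = when (exif.getAttributeInt(
            ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL)) {
        ExifInterface.ORIENTATION_ROTATE_90 -> 90
        ExifInterface.ORIENTATION_ROTATE_180 -> 180
        ExifInterface.ORIENTATION_ROTATE_270 -> 270
        else -> 0
    }
    val image = InputImage.fromBitmap(bitmap, rotationDegrees)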

3. Process the image

Pass the prepared InputImage object to the PoseDetector's process method.

Kotlin

val result = poseDetector.process(image)
    .addOnSuccessListener { results ->
        // Task completed successfully
        // ...
    }
    .addOnFailureListener { e ->
        // Task failed with an exception
        // ...
    }

Java

Task<Pose> result =
        poseDetector.process(image)
                .addOnSuccessListener(
                        new OnSuccessListener<Pose>() {
                            @Override
                            public void onSuccess(Pose pose) {
                                // Task completed successfully
                                // ...
                            }
                        })
                .addOnFailureListener(
                        new OnFailureListener() {
                            @Override
                            public void onFailure(@NonNull Exception e) {
                                // Task failed with an exception
                                // ...
                            }
                        });

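If your frames come from the CameraX analyzer shown earlier, one detail matters for throughput: CameraX delivers the next frame only after the current one is closed. A minimal sketch, assuming imageProxy is the parameter of the analyze() method above:

    poseDetector.process(image)
        .addOnSuccessListener { pose -> /* use the detected pose */ }
        .addOnFailureListener { e -> /* handle or log the error */ }
        // Close the frame once detection completes so the next frame can be delivered.
        .addOnCompleteListener { imageProxy.close() }
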
4. Get information about the detected pose

If a person is detected in the image, the pose detection API returns a Pose object with 33 PoseLandmarks.

If the person was not completely inside the image, the model assigns the missing landmarks coordinates outside the frame and gives them low InFrameLikelihood values.

If no person was detected in the frame, the Pose object contains no PoseLandmarks.

Kotlin

// Get all PoseLandmarks. If no person was detected, the list will be empty
val allPoseLandmarks = pose.getAllPoseLandmarks()

// Or get specific PoseLandmarks individually. These will all be null if no person
// was detected
val leftShoulder = pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER)
val rightShoulder = pose.getPoseLandmark(PoseLandmark.RIGHT_SHOULDER)
val leftElbow = pose.getPoseLandmark(PoseLandmark.LEFT_ELBOW)
val rightElbow = pose.getPoseLandmark(PoseLandmark.RIGHT_ELBOW)
val leftWrist = pose.getPoseLandmark(PoseLandmark.LEFT_WRIST)
val rightWrist = pose.getPoseLandmark(PoseLandmark.RIGHT_WRIST)
val leftHip = pose.getPoseLandmark(PoseLandmark.LEFT_HIP)
val rightHip = pose.getPoseLandmark(PoseLandmark.RIGHT_HIP)
val leftKnee = pose.getPoseLandmark(PoseLandmark.LEFT_KNEE)
val rightKnee = pose.getPoseLandmark(PoseLandmark.RIGHT_KNEE)
val leftAnkle = pose.getPoseLandmark(PoseLandmark.LEFT_ANKLE)
val rightAnkle = pose.getPoseLandmark(PoseLandmark.RIGHT_ANKLE)
val leftPinky = pose.getPoseLandmark(PoseLandmark.LEFT_PINKY)
val rightPinky = pose.getPoseLandmark(PoseLandmark.RIGHT_PINKY)
val leftIndex = pose.getPoseLandmark(PoseLandmark.LEFT_INDEX)
val rightIndex = pose.getPoseLandmark(PoseLandmark.RIGHT_INDEX)
val leftThumb = pose.getPoseLandmark(PoseLandmark.LEFT_THUMB)
val rightThumb = pose.getPoseLandmark(PoseLandmark.RIGHT_THUMB)
val leftHeel = pose.getPoseLandmark(PoseLandmark.LEFT_HEEL)
val rightHeel = pose.getPoseLandmark(PoseLandmark.RIGHT_HEEL)
val leftFootIndex = pose.getPoseLandmark(PoseLandmark.LEFT_FOOT_INDEX)
val rightFootIndex = pose.getPoseLandmark(PoseLandmark.RIGHT_FOOT_INDEX)
val nose = pose.getPoseLandmark(PoseLandmark.NOSE)
val leftEyeInner = pose.getPoseLandmark(PoseLandmark.LEFT_EYE_INNER)
val leftEye = pose.getPoseLandmark(PoseLandmark.LEFT_EYE)
val leftEyeOuter = pose.getPoseLandmark(PoseLandmark.LEFT_EYE_OUTER)
val rightEyeInner = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE_INNER)
val rightEye = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE)
val rightEyeOuter = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE_OUTER)
val leftEar = pose.getPoseLandmark(PoseLandmark.LEFT_EAR)
val rightEar = pose.getPoseLandmark(PoseLandmark.RIGHT_EAR)
val leftMouth = pose.getPoseLandmark(PoseLandmark.LEFT_MOUTH)
val rightMouth = pose.getPoseLandmark(PoseLandmark.RIGHT_MOUTH)

Java

// Get all PoseLandmarks. If no person was detected, the list will be empty
List<PoseLandmark> allPoseLandmarks = pose.getAllPoseLandmarks();

// Or get specific PoseLandmarks individually. These will all be null if no person
// was detected
PoseLandmark leftShoulder = pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER);
PoseLandmark rightShoulder = pose.getPoseLandmark(PoseLandmark.RIGHT_SHOULDER);
PoseLandmark leftElbow = pose.getPoseLandmark(PoseLandmark.LEFT_ELBOW);
PoseLandmark rightElbow = pose.getPoseLandmark(PoseLandmark.RIGHT_ELBOW);
PoseLandmark leftWrist = pose.getPoseLandmark(PoseLandmark.LEFT_WRIST);
PoseLandmark rightWrist = pose.getPoseLandmark(PoseLandmark.RIGHT_WRIST);
PoseLandmark leftHip = pose.getPoseLandmark(PoseLandmark.LEFT_HIP);
PoseLandmark rightHip = pose.getPoseLandmark(PoseLandmark.RIGHT_HIP);
PoseLandmark leftKnee = pose.getPoseLandmark(PoseLandmark.LEFT_KNEE);
PoseLandmark rightKnee = pose.getPoseLandmark(PoseLandmark.RIGHT_KNEE);
PoseLandmark leftAnkle = pose.getPoseLandmark(PoseLandmark.LEFT_ANKLE);
PoseLandmark rightAnkle = pose.getPoseLandmark(PoseLandmark.RIGHT_ANKLE);
PoseLandmark leftPinky = pose.getPoseLandmark(PoseLandmark.LEFT_PINKY);
PoseLandmark rightPinky = pose.getPoseLandmark(PoseLandmark.RIGHT_PINKY);
PoseLandmark leftIndex = pose.getPoseLandmark(PoseLandmark.LEFT_INDEX);
PoseLandmark rightIndex = pose.getPoseLandmark(PoseLandmark.RIGHT_INDEX);
PoseLandmark leftThumb = pose.getPoseLandmark(PoseLandmark.LEFT_THUMB);
PoseLandmark rightThumb = pose.getPoseLandmark(PoseLandmark.RIGHT_THUMB);
PoseLandmark leftHeel = pose.getPoseLandmark(PoseLandmark.LEFT_HEEL);
PoseLandmark rightHeel = pose.getPoseLandmark(PoseLandmark.RIGHT_HEEL);
PoseLandmark leftFootIndex = pose.getPoseLandmark(PoseLandmark.LEFT_FOOT_INDEX);
PoseLandmark rightFootIndex = pose.getPoseLandmark(PoseLandmark.RIGHT_FOOT_INDEX);
PoseLandmark nose = pose.getPoseLandmark(PoseLandmark.NOSE);
PoseLandmark leftEyeInner = pose.getPoseLandmark(PoseLandmark.LEFT_EYE_INNER);
PoseLandmark leftEye = pose.getPoseLandmark(PoseLandmark.LEFT_EYE);
PoseLandmark leftEyeOuter = pose.getPoseLandmark(PoseLandmark.LEFT_EYE_OUTER);
PoseLandmark rightEyeInner = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE_INNER);
PoseLandmark rightEye = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE);
PoseLandmark rightEyeOuter = pose.getPoseLandmark(PoseLandmark.RIGHT_EYE_OUTER);
PoseLandmark leftEar = pose.getPoseLandmark(PoseLandmark.LEFT_EAR);
PoseLandmark rightEar = pose.getPoseLandmark(PoseLandmark.RIGHT_EAR);
PoseLandmark leftMouth = pose.getPoseLandmark(PoseLandmark.LEFT_MOUTH);
PoseLandmark rightMouth = pose.getPoseLandmark(PoseLandmark.RIGHT_MOUTH);

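With these landmarks you can derive higher-level quantities such as joint angles. As an illustration, here is a sketch of a hypothetical Kotlin helper (not part of ML Kit) that computes the angle at a middle landmark, assuming PoseLandmark.getPosition() returns 2D screen coordinates:

    import kotlin.math.abs
    import kotlin.math.atan2

    // Angle at `middle` formed by the segments middle->first and middle->last,
    // normalized to the range [0, 180] degrees.
    fun jointAngleDegrees(first: PoseLandmark, middle: PoseLandmark, last: PoseLandmark): Double {
        var angle = Math.toDegrees(
            (atan2(last.position.y - middle.position.y, last.position.x - middle.position.x) -
                atan2(first.position.y - middle.position.y, first.position.x - middle.position.x)).toDouble()
        )
        angle = abs(angle)
        if (angle > 180) angle = 360 - angle
        return angle
    }

    // Usage: angle at the left elbow (shoulder-elbow-wrist), if all three were detected.
    // val elbowAngle = jointAngleDegrees(leftShoulder!!, leftElbow!!, leftWrist!!)
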
Tips to improve performance

The quality of your results depends on the quality of the input image:

  • For ML Kit to accurately detect pose, the person in the image should be represented by sufficient pixel data; for best performance, the subject should be at least 256x256 pixels.
  • If you detect pose in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions, but keep in mind the above resolution requirements and ensure that the subject occupies as much of the image as possible.
  • Poor image focus can also impact accuracy. If you don't get acceptable results, ask the user to recapture the image.

If you want to use pose detection in a real-time application, follow these guidelines to achieve the best framerates:

  • Use the base pose-detection SDK and STREAM_MODE.
  • Consider capturing images at a lower resolution. However, also keep in mind this API's image dimension requirements.
  • If you use the Camera or camera2 API, throttle calls to the detector: if a new video frame becomes available while the detector is running, drop the frame (a minimal throttling sketch follows this list). See the VisionProcessorBase class in the quickstart sample app for an example.
  • If you use the CameraX API, be sure that the backpressure strategy is set to its default value ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST. This guarantees that only one image will be delivered for analysis at a time. If more images are produced while the analyzer is busy, they will be dropped automatically and not queued for delivery. Once the image being analyzed is closed by calling ImageProxy.close(), the next latest image will be delivered.
  • If you use the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. This renders to the display surface only once for each input frame. See the CameraSourcePreview and GraphicOverlay classes in the quickstart sample app for an example.
  • If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format. If you use the older Camera API, capture images in ImageFormat.NV21 format.
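
A minimal sketch of the throttling pattern mentioned above for the Camera/camera2 APIs; the isProcessing flag and onFrame() are hypothetical names, not ML Kit APIs:

    // Drop frames that arrive while a detection is still in flight.
    @Volatile private var isProcessing = false

    fun onFrame(image: InputImage) {
        if (isProcessing) return // skip this frame; a detection is still running
        isProcessing = true
        poseDetector.process(image)
            .addOnCompleteListener { isProcessing = false }
    }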

Next steps

To learn how to use the detected landmarks to classify poses, see the ML Kit pose classification guide.