Subject segmentation with ML Kit for Android

Use ML Kit to easily add subject segmentation features to your app.

Feature details

SDK name: play-services-mlkit-subject-segmentation
Implementation: Unbundled; the model is dynamically downloaded using Google Play services.
App size impact: ~200 KB size increase.
Initialization time: Users might have to wait for the model to download before first use.

Try it out

  • Play around with the sample app to see an example usage of this API.

Before you begin

  1. In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections.
  2. Add the dependency for the ML Kit subject segmentation library to your module's app-level gradle file, which is usually app/build.gradle:
dependencies {
    implementation 'com.google.android.gms:play-services-mlkit-subject-segmentation:16.0.0-beta1'
}
 

As mentioned above, the model is provided by Google Play services. You can configure your app to automatically download the model to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:

<application ...>
    ...
    <meta-data
        android:name="com.google.mlkit.vision.DEPENDENCIES"
        android:value="subject_segment" />
    <!-- To use multiple models: android:value="subject_segment,model2,model3" -->
</application>

You can also explicitly check the model availability and request a download through Google Play services with the ModuleInstallClient API.
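For example, here is a minimal Kotlin sketch of that check, assuming the segmenter client (created as in step 2 below) can be passed as an OptionalModuleApi, as with other ML Kit clients; the ensureSegmentationModule name is illustrative:

import android.content.Context
import com.google.android.gms.common.moduleinstall.ModuleInstall
import com.google.android.gms.common.moduleinstall.ModuleInstallRequest
import com.google.mlkit.vision.segmentation.subject.SubjectSegmentation
import com.google.mlkit.vision.segmentation.subject.SubjectSegmenterOptions

// Sketch: check whether the subject segmentation module is installed and,
// if not, request an explicit download through Google Play services.
fun ensureSegmentationModule(context: Context) {
    val moduleInstallClient = ModuleInstall.getClient(context)
    // Assumption: the segmenter client is accepted as an OptionalModuleApi.
    val segmenterApi = SubjectSegmentation.getClient(SubjectSegmenterOptions.Builder().build())

    moduleInstallClient.areModulesAvailable(segmenterApi)
        .addOnSuccessListener { response ->
            if (!response.areModulesAvailable()) {
                val request = ModuleInstallRequest.newBuilder()
                    .addApi(segmenterApi)
                    .build()
                moduleInstallClient.installModules(request)
                    .addOnSuccessListener { /* download requested */ }
                    .addOnFailureListener { /* handle the error */ }
            }
        }
        .addOnFailureListener { /* handle the error */ }
}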

If you don't enable install-time model downloads or request an explicit download, the model is downloaded the first time you run the segmenter. Requests you make before the download has completed produce no results.

1. Prepare the input image

To perform segmentation on an image, create an InputImage object from a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device.

You can create an InputImage object from different sources, each of which is explained below.

Using a media.Image

To create an InputImage object from a media.Image object, such as when you capture an image from a device's camera, pass the media.Image object and the image's rotation to InputImage.fromMediaImage().

If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you.

Kotlin

private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

Java

private class YourAnalyzer implements ImageAnalysis.Analyzer {

    @Override
    public void analyze(ImageProxy imageProxy) {
        Image mediaImage = imageProxy.getImage();
        if (mediaImage != null) {
            InputImage image =
                    InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

If you don't use a camera library that gives you the image's rotation degree, you can calculate it from the device's rotation degree and the orientation of the camera sensor in the device:

Kotlin

private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 0)
    ORIENTATIONS.append(Surface.ROTATION_90, 90)
    ORIENTATIONS.append(Surface.ROTATION_180, 180)
    ORIENTATIONS.append(Surface.ROTATION_270, 270)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, isFrontFacing: Boolean): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // Get the device's sensor orientation.
    val cameraManager = activity.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION)!!

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360
    }
    return rotationCompensation
}
  

Java

private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 0);
    ORIENTATIONS.append(Surface.ROTATION_90, 90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, boolean isFrontFacing)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // Get the device's sensor orientation.
    CameraManager cameraManager = (CameraManager) activity.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360;
    }
    return rotationCompensation;
}

Then, pass the media.Image object and the rotation degree value to InputImage.fromMediaImage():

Kotlin

val image = InputImage.fromMediaImage(mediaImage, rotation)
  

Java

InputImage image = InputImage.fromMediaImage(mediaImage, rotation);

Using a file URI

To create an InputImage object from a file URI, pass the app context and file URI to InputImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

Kotlin

val image: InputImage
try {
    image = InputImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}
  

Java

InputImage image;
try {
    image = InputImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}
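The examples above assume you already have the content uri. As a rough Kotlin sketch (assuming an AndroidX ComponentActivity and the Activity Result API; the pickImage name is illustrative), you could obtain it with the GetContent contract, which wraps ACTION_GET_CONTENT:

// Register a launcher that wraps the ACTION_GET_CONTENT intent.
private val pickImage =
    registerForActivityResult(ActivityResultContracts.GetContent()) { uri ->
        uri?.let {
            try {
                val image = InputImage.fromFilePath(this, it)
                // Pass image to the segmenter
            } catch (e: IOException) {
                e.printStackTrace()
            }
        }
    }

// Later, for example from a button click handler:
// pickImage.launch("image/*")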

Using a ByteBuffer or ByteArray

To create an InputImage object from a ByteBuffer or a ByteArray, first calculate the image rotation degree as previously described for media.Image input. Then, create the InputImage object with the buffer or array, together with the image's height, width, color encoding format, and rotation degree:

Kotlin

val image = InputImage.fromByteBuffer(
    byteBuffer,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

// Or:
val image = InputImage.fromByteArray(
    byteArray,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)
  

Java

InputImage image = InputImage.fromByteBuffer(
    byteBuffer,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

// Or:
InputImage image = InputImage.fromByteArray(
    byteArray,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);
  

Using a Bitmap

To create an InputImage object from a Bitmap object, make the following declaration:

Kotlin

val image = InputImage.fromBitmap(bitmap, 0)
  

Java

InputImage image = InputImage.fromBitmap(bitmap, rotationDegree);
  

The image is represented by a Bitmap object together with rotation degrees.

2. Create an instance of SubjectSegmenter

Define the segmenter options

To segment your image, first create an instance of SubjectSegmenterOptions as follows:

Kotlin

val options = SubjectSegmenterOptions.Builder()
       // enable options
       .build()

Java

SubjectSegmenterOptions options = new SubjectSegmenterOptions.Builder()
        // enable options
        .build();

Here are the details of each option:

Foreground confidence mask

The foreground confidence mask lets you distinguish the foreground subject from the background.

Calling enableForegroundConfidenceMask() in the options lets you later retrieve the foreground mask by calling getForegroundMask() on the SubjectSegmentationResult object returned after processing the image.

Kotlin

val options = SubjectSegmenterOptions.Builder()
    .enableForegroundConfidenceMask()
    .build()

Java

SubjectSegmenterOptions options = new SubjectSegmenterOptions.Builder()
    .enableForegroundConfidenceMask()
    .build();

Foreground bitmap

Similarly, you can also get a bitmap of the foreground subject.

Calling enableForegroundBitmap() in the options lets you later retrieve the foreground bitmap by calling getForegroundBitmap() on the SubjectSegmentationResult object returned after processing the image.

Kotlin

val options = SubjectSegmenterOptions.Builder()
    .enableForegroundBitmap()
    .build()

Java

SubjectSegmenterOptions options = new SubjectSegmenterOptions.Builder()
    .enableForegroundBitmap()
    .build();

Multi-subject confidence mask

As with the foreground options, you can use SubjectResultOptions to enable the confidence mask for each foreground subject as follows:

Kotlin

val subjectResultOptions = SubjectSegmenterOptions.SubjectResultOptions.Builder()
    .enableConfidenceMask()
    .build()

val options = SubjectSegmenterOptions.Builder()
    .enableMultipleSubjects(subjectResultOptions)
    .build()

Java

SubjectResultOptions subjectResultOptions =
        new SubjectSegmenterOptions.SubjectResultOptions.Builder()
            .enableConfidenceMask()
            .build();

SubjectSegmenterOptions options = new SubjectSegmenterOptions.Builder()
      .enableMultipleSubjects(subjectResultOptions)
      .build();

Multi-subject bitmap

And similarly, you can enable the bitmap for each subject:

Kotlin

val subjectResultOptions = SubjectSegmenterOptions.SubjectResultOptions.Builder()
    .enableSubjectBitmap()
    .build()

val options = SubjectSegmenterOptions.Builder()
    .enableMultipleSubjects(subjectResultOptions)
    .build()

Java

SubjectResultOptions subjectResultOptions =
        new SubjectSegmenterOptions.SubjectResultOptions.Builder()
            .enableSubjectBitmap()
            .build();

SubjectSegmenterOptions options = new SubjectSegmenterOptions.Builder()
      .enableMultipleSubjects(subjectResultOptions)
      .build();

Create the subject segmenter

Once you have specified the SubjectSegmenterOptions, create a SubjectSegmenter instance by calling getClient() and passing the options as a parameter:

Kotlin

val segmenter = SubjectSegmentation.getClient(options)

Java

SubjectSegmenter segmenter = SubjectSegmentation.getClient(options);

3. Process an image

Pass the prepared InputImage object to the SubjectSegmenter's process method:

Kotlin

segmenter.process(inputImage)
    .addOnSuccessListener { result ->
        // Task completed successfully
        // ...
    }
    .addOnFailureListener { e ->
        // Task failed with an exception
        // ...
    }

Java

segmenter.process(inputImage)
    .addOnSuccessListener(new OnSuccessListener<SubjectSegmentationResult>() {
        @Override
        public void onSuccess(SubjectSegmentationResult result) {
            // Task completed successfully
            // ...
        }
    })
    .addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(@NonNull Exception e) {
            // Task failed with an exception
            // ...
        }
    });
 

4. Get the subject segmentation result

Retrieve foreground masks and bitmaps

Once processed, you can retrieve the foreground mask for your image by calling getForegroundConfidenceMask() as follows:

Kotlin

val colors = IntArray(image.width * image.height)
val foregroundMask = result.foregroundConfidenceMask
for (i in 0 until image.width * image.height) {
    if (foregroundMask[i] > 0.5f) {
        colors[i] = Color.argb(128, 255, 0, 255)
    }
}

val bitmapMask = Bitmap.createBitmap(
    colors, image.width, image.height, Bitmap.Config.ARGB_8888
)

Java

int[] colors = new int[image.getWidth() * image.getHeight()];
FloatBuffer foregroundMask = result.getForegroundConfidenceMask();
for (int i = 0; i < image.getWidth() * image.getHeight(); i++) {
    if (foregroundMask.get() > 0.5f) {
        colors[i] = Color.argb(128, 255, 0, 255);
    }
}

Bitmap bitmapMask = Bitmap.createBitmap(
    colors, image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888
);
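As an illustrative follow-up to the example above (not part of the original sample), you could composite the semi-transparent mask over the source image with a Canvas; originalBitmap and imageView below are assumed to exist in your app:

// Sketch: overlay the mask built above on the original image for display.
val overlay = Bitmap.createBitmap(image.width, image.height, Bitmap.Config.ARGB_8888)
val canvas = Canvas(overlay)
canvas.drawBitmap(originalBitmap, 0f, 0f, null) // the source image (assumed)
canvas.drawBitmap(bitmapMask, 0f, 0f, null)     // the semi-transparent magenta mask
imageView.setImageBitmap(overlay)               // an ImageView in your layout (assumed)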

You can also retrieve a bitmap of the foreground of the image by calling getForegroundBitmap():

Kotlin

val foregroundBitmap = result.foregroundBitmap

Java

Bitmap foregroundBitmap = result.getForegroundBitmap();

Retrieve masks and bitmaps for each subject

Similarly, you can retrieve the mask for the segmented subjects by calling getConfidenceMask() on each subject as follows:

Kotlin

val subjects = result.subjects

val colors = IntArray(image.width * image.height)
for (subject in subjects) {
    val mask = subject.confidenceMask
    for (i in 0 until subject.width * subject.height) {
        val confidence = mask[i]
        if (confidence > 0.5f) {
            colors[image.width * (subject.startY - 1) + subject.startX] =
                Color.argb(128, 255, 0, 255)
        }
    }
}

val bitmapMask = Bitmap.createBitmap(
    colors, image.width, image.height, Bitmap.Config.ARGB_8888
)

Java

List<Subject> subjects = result.getSubjects();

int[] colors = new int[image.getWidth() * image.getHeight()];
for (Subject subject : subjects) {
    FloatBuffer mask = subject.getConfidenceMask();
    for (int i = 0; i < subject.getWidth() * subject.getHeight(); i++) {
        float confidence = mask.get();
        if (confidence > 0.5f) {
            colors[image.getWidth() * (subject.getStartY() - 1) + subject.getStartX()]
                = Color.argb(128, 255, 0, 255);
        }
    }
}

Bitmap bitmapMask = Bitmap.createBitmap(
    colors, image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888
);

You can also access the bitmap of each segmented subject as follows:

Kotlin

val bitmaps = mutableListOf<Bitmap>()
for (subject in subjects) {
  bitmaps.add(subject.bitmap)
}

Java

List<Bitmap> bitmaps = new ArrayList<>();
for (Subject subject : subjects) {
  bitmaps.add(subject.getBitmap());
}

Tips to improve performance

For each app session, the first inference is often slower than subsequent inferences due to model initialization. If low latency is critical, consider running a "dummy" inference ahead of time.
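For example, a rough Kotlin sketch of such a warm-up call (the blank 512x512 bitmap is just an arbitrary placeholder):

// Warm-up sketch: run one inference on a placeholder image so the model is
// initialized before the first real request.
val warmUpImage = InputImage.fromBitmap(
    Bitmap.createBitmap(512, 512, Bitmap.Config.ARGB_8888), 0
)
segmenter.process(warmUpImage)
    .addOnCompleteListener {
        // Model is initialized; subsequent requests should be faster.
    }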

The quality of your results depends on the quality of the input image:

  • For ML Kit to get an accurate segmentation result, the image should be at least 512x512 pixels.
  • Poor image focus can also impact accuracy. If you don't get acceptable results, ask the user to recapture the image.