Detect, track and classify objects with a custom classification model on Android

You can use ML Kit to detect and track objects in successive video frames.

When you pass an image to ML Kit, it detects up to five objects in the image, along with the position of each one. When detecting objects in video streams, each object has a unique ID that you can use to track the object from frame to frame.

You can use a custom image classification model to classify the objects that are detected. See Custom models with ML Kit for guidance on model compatibility requirements, where to find pre-trained models, and how to train your own models.

There are two ways to integrate a custom model. You can bundle the model by putting it inside your app’s asset folder, or you can dynamically download it from Firebase. The following table compares the two options.

| Bundled model | Hosted model |
| --- | --- |
| The model is part of your app's APK, which increases its size. | The model is not part of your APK. It is hosted by uploading it to Firebase Machine Learning. |
| The model is available immediately, even when the Android device is offline. | The model is downloaded on demand. |
| No Firebase project is required. | Requires a Firebase project. |
| You must republish your app to update the model. | Push model updates without republishing your app. |
| No built-in A/B testing. | Easy A/B testing with Firebase Remote Config. |

Before you begin

  1. In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections.
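
     For example, a minimal sketch assuming the Groovy DSL (your repository list may include additional entries):

      buildscript {
          repositories {
              google()  // Google's Maven repository
              mavenCentral()
          }
      }

      allprojects {
          repositories {
              google()
              mavenCentral()
          }
      }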

  2. Add the dependencies for the ML Kit Android libraries to your module's app-level gradle file, which is usually app/build.gradle:

    For bundling a model with your app:

      dependencies {
          // ...
          // Object detection & tracking feature with custom bundled model
          implementation 'com.google.mlkit:object-detection-custom:17.0.2'
      }

    For dynamically downloading a model from Firebase, add the linkFirebase dependency:

      dependencies {
          // ...
          // Object detection & tracking feature with model
          // downloaded from Firebase
          implementation 'com.google.mlkit:object-detection-custom:17.0.2'
          implementation 'com.google.mlkit:linkfirebase:17.0.0'
      }

  3. If you want to download a model, make sure you add Firebase to your Android project, if you have not already done so. This is not required when you bundle the model.

1. Load the model

Configure a local model source

To bundle the model with your app:

  1. Copy the model file (usually ending in .tflite or .lite) to your app's assets/ folder. (You might need to create the folder first by right-clicking the app/ folder, then clicking New > Folder > Assets Folder.)

  2. Then, add the following to your app's build.gradle file to ensure Gradle doesn’t compress the model file when building the app:

     android {
        // ...
        aaptOptions {
            noCompress "tflite"
            // or noCompress "lite"
        }
    } 
    

    The model file will be included in the app package and available to ML Kit as a raw asset.

  3. Create a LocalModel object, specifying the path to the model file:

    Kotlin

      val localModel = LocalModel.Builder()
              .setAssetFilePath("model.tflite")
              // or .setAbsoluteFilePath(absolute file path to model file)
              // or .setUri(URI to model file)
              .build()

    Java

      LocalModel localModel =
              new LocalModel.Builder()
                      .setAssetFilePath("model.tflite")
                      // or .setAbsoluteFilePath(absolute file path to model file)
                      // or .setUri(URI to model file)
                      .build();

Configure a Firebase-hosted model source

To use the remotely-hosted model, create a CustomRemoteModel object using FirebaseModelSource, specifying the name you assigned the model when you published it:

Kotlin

// Specify the name you assigned in the Firebase console.
val remoteModel =
        CustomRemoteModel.Builder(FirebaseModelSource.Builder("your_model_name").build())
                .build()

Java

// Specify the name you assigned in the Firebase console.
CustomRemoteModel remoteModel =
        new CustomRemoteModel.Builder(
                new FirebaseModelSource.Builder("your_model_name").build())
                .build();

Then, start the model download task, specifying the conditions under which you want to allow downloading. If the model isn't on the device, or if a newer version of the model is available, the task will asynchronously download the model from Firebase:

Kotlin

val downloadConditions = DownloadConditions.Builder()
        .requireWifi()
        .build()

RemoteModelManager.getInstance().download(remoteModel, downloadConditions)
        .addOnSuccessListener {
            // Success.
        }

Java

DownloadConditions downloadConditions = new DownloadConditions.Builder()
        .requireWifi()
        .build();

RemoteModelManager.getInstance().download(remoteModel, downloadConditions)
        .addOnSuccessListener(new OnSuccessListener<Void>() {
            @Override
            public void onSuccess(Void unused) {
                // Success.
            }
        });

Many apps start the download task in their initialization code, but you can do so at any point before you need to use the model.
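
For example, a minimal sketch that starts the download when the app launches (MyApplication is an illustrative name, not part of the ML Kit API):

class MyApplication : Application() {

    override fun onCreate() {
        super.onCreate()
        // Illustrative: the same CustomRemoteModel built in the previous step.
        val remoteModel = CustomRemoteModel
                .Builder(FirebaseModelSource.Builder("your_model_name").build())
                .build()
        val conditions = DownloadConditions.Builder()
                .requireWifi()
                .build()
        // Kick off the download early so the model is likely ready when needed.
        RemoteModelManager.getInstance()
                .download(remoteModel, conditions)
                .addOnFailureListener {
                    // Download failed: keep model-dependent UI disabled, or retry later.
                }
    }
}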

2. Configure the object detector

After you configure your model sources, configure the object detector for your use case with a CustomObjectDetectorOptions object. You can change the following settings:

Object Detector Settings
Detection mode
STREAM_MODE (default) | SINGLE_IMAGE_MODE

In STREAM_MODE (default), the object detector runs with low latency, but might produce incomplete results (such as unspecified bounding boxes or category labels) on the first few invocations of the detector. Also, in STREAM_MODE, the detector assigns tracking IDs to objects, which you can use to track objects across frames. Use this mode when you want to track objects, or when low latency is important, such as when processing video streams in real time.

In SINGLE_IMAGE_MODE, the object detector returns the result after the object's bounding box is determined. If you also enable classification, it returns the result after the bounding box and category label are both available. As a consequence, detection latency is potentially higher. Also, in SINGLE_IMAGE_MODE, tracking IDs are not assigned. Use this mode if latency isn't critical and you don't want to deal with partial results.

Detect and track multiple objects
false (default) | true

Whether to detect and track up to five objects, or only the most prominent object (default).

Classify objects
false (default) | true

Whether to classify detected objects by using the provided custom classifier model. To use your custom classification model, you must set this to true.

Classification confidence threshold

Minimum confidence score of detected labels. If not set, any classifier threshold specified by the model's metadata is used. If the model does not contain any metadata, or the metadata does not specify a classifier threshold, a default threshold of 0.0 is used.

Maximum labels per object

Maximum number of labels per object that the detector returns. If not set, the default value of 10 is used.

The object detection and tracking API is optimized for these two core use cases:

  • Live detection and tracking of the most prominent object in the camera viewfinder.
  • The detection of multiple objects from a static image.

To configure the API for these use cases, with a locally-bundled model:

Kotlin

// Live detection and tracking
val customObjectDetectorOptions =
        CustomObjectDetectorOptions.Builder(localModel)
                .setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
                .enableClassification()
                .setClassificationConfidenceThreshold(0.5f)
                .setMaxPerObjectLabelCount(3)
                .build()

// Multiple object detection in static images
val customObjectDetectorOptions =
        CustomObjectDetectorOptions.Builder(localModel)
                .setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
                .enableMultipleObjects()
                .enableClassification()
                .setClassificationConfidenceThreshold(0.5f)
                .setMaxPerObjectLabelCount(3)
                .build()

val objectDetector = ObjectDetection.getClient(customObjectDetectorOptions)

Java

// Live detection and tracking
CustomObjectDetectorOptions customObjectDetectorOptions =
        new CustomObjectDetectorOptions.Builder(localModel)
                .setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
                .enableClassification()
                .setClassificationConfidenceThreshold(0.5f)
                .setMaxPerObjectLabelCount(3)
                .build();

// Multiple object detection in static images
CustomObjectDetectorOptions customObjectDetectorOptions =
        new CustomObjectDetectorOptions.Builder(localModel)
                .setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
                .enableMultipleObjects()
                .enableClassification()
                .setClassificationConfidenceThreshold(0.5f)
                .setMaxPerObjectLabelCount(3)
                .build();

ObjectDetector objectDetector = ObjectDetection.getClient(customObjectDetectorOptions);

If you have a remotely-hosted model, you will have to check that it has been downloaded before you run it. You can check the status of the model download task using the model manager's isModelDownloaded() method.

Although you only have to confirm this before running the detector, if you have both a remotely-hosted model and a locally-bundled model, it might make sense to perform this check when instantiating the object detector: create a detector from the remote model if it's been downloaded, and from the local model otherwise.

Kotlin

RemoteModelManager.getInstance().isModelDownloaded(remoteModel)
        .addOnSuccessListener { isDownloaded ->
            val optionsBuilder =
                    if (isDownloaded) {
                        CustomObjectDetectorOptions.Builder(remoteModel)
                    } else {
                        CustomObjectDetectorOptions.Builder(localModel)
                    }
            val customObjectDetectorOptions = optionsBuilder
                    .setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
                    .enableClassification()
                    .setClassificationConfidenceThreshold(0.5f)
                    .setMaxPerObjectLabelCount(3)
                    .build()
            val objectDetector = ObjectDetection.getClient(customObjectDetectorOptions)
        }

Java

RemoteModelManager.getInstance().isModelDownloaded(remoteModel)
        .addOnSuccessListener(new OnSuccessListener<Boolean>() {
            @Override
            public void onSuccess(Boolean isDownloaded) {
                CustomObjectDetectorOptions.Builder optionsBuilder;
                if (isDownloaded) {
                    optionsBuilder = new CustomObjectDetectorOptions.Builder(remoteModel);
                } else {
                    optionsBuilder = new CustomObjectDetectorOptions.Builder(localModel);
                }
                CustomObjectDetectorOptions customObjectDetectorOptions = optionsBuilder
                        .setDetectorMode(CustomObjectDetectorOptions.SINGLE_IMAGE_MODE)
                        .enableClassification()
                        .setClassificationConfidenceThreshold(0.5f)
                        .setMaxPerObjectLabelCount(3)
                        .build();
                ObjectDetector objectDetector = ObjectDetection.getClient(customObjectDetectorOptions);
            }
        });

If you only have a remotely-hosted model, you should disable model-related functionality—for example, grey-out or hide part of your UI—until you confirm the model has been downloaded. You can do so by attaching a listener to the model manager's download() method:

Kotlin

RemoteModelManager.getInstance().download(remoteModel, conditions)
        .addOnSuccessListener {
            // Download complete. Depending on your app, you could enable the ML
            // feature, or switch from the local model to the remote model, etc.
        }

Java

RemoteModelManager.getInstance().download(remoteModel, conditions)
        .addOnSuccessListener(new OnSuccessListener<Void>() {
            @Override
            public void onSuccess(Void v) {
                // Download complete. Depending on your app, you could enable
                // the ML feature, or switch from the local model to the remote
                // model, etc.
            }
        });

3. Prepare the input image

Create an InputImage object from your image. The object detector runs directly from a Bitmap, NV21 ByteBuffer or a YUV_420_888 media.Image. Constructing an InputImage from one of those sources is recommended if you have direct access to it. If you construct an InputImage from other sources, we handle the conversion internally for you, and it might be less efficient.

You can create an InputImage object from different sources; each is explained below.

Using a media.Image

To create an InputImage object from a media.Image object, such as when you capture an image from a device's camera, pass the media.Image object and the image's rotation to InputImage.fromMediaImage().

If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you.

Kotlin

private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

Java

private class YourAnalyzer implements ImageAnalysis.Analyzer {

    @Override
    public void analyze(ImageProxy imageProxy) {
        Image mediaImage = imageProxy.getImage();
        if (mediaImage != null) {
            InputImage image =
                    InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

If you don't use a camera library that gives you the image's rotation degree, you can calculate it from the device's rotation degree and the orientation of the camera sensor in the device:

Kotlin

private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 0)
    ORIENTATIONS.append(Surface.ROTATION_90, 90)
    ORIENTATIONS.append(Surface.ROTATION_180, 180)
    ORIENTATIONS.append(Surface.ROTATION_270, 270)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, isFrontFacing: Boolean): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // Get the device's sensor orientation.
    val cameraManager = activity.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION)!!

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360
    }
    return rotationCompensation
}

Java

private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 0);
    ORIENTATIONS.append(Surface.ROTATION_90, 90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, boolean isFrontFacing)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // Get the device's sensor orientation.
    CameraManager cameraManager = (CameraManager) activity.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360;
    }
    return rotationCompensation;
}

Then, pass the media.Image object and the rotation degree value to InputImage.fromMediaImage():

Kotlin

val image = InputImage.fromMediaImage(mediaImage, rotation)

Java

InputImage image = InputImage.fromMediaImage(mediaImage, rotation);

Using a file URI

To create an InputImage object from a file URI, pass the app context and file URI to InputImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

Kotlin

val image: InputImage
try {
    image = InputImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}

Java

InputImage image;
try {
    image = InputImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}

Using a ByteBuffer or ByteArray

To create an InputImage object from a ByteBuffer or a ByteArray, first calculate the image rotation degree as previously described for media.Image input. Then, create the InputImage object with the buffer or array, together with the image's height, width, color encoding format, and rotation degree:

Kotlin

val image = InputImage.fromByteBuffer(
        byteBuffer,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

// Or:
val image = InputImage.fromByteArray(
        byteArray,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

Java

InputImage image = InputImage.fromByteBuffer(
        byteBuffer,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

// Or:
InputImage image = InputImage.fromByteArray(
        byteArray,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

Using a Bitmap

To create an InputImage object from a Bitmap object, make the following declaration:

Kotlin

val image = InputImage.fromBitmap(bitmap, 0)

Java

InputImage image = InputImage.fromBitmap(bitmap, rotationDegree);

The image is represented by a Bitmap object together with rotation degrees.

4. Run the object detector

Pass the image to the object detector's process() method:

Kotlin

objectDetector
        .process(image)
        .addOnFailureListener { e ->
            // Task failed with an exception.
            // ...
        }
        .addOnSuccessListener { results ->
            for (detectedObject in results) {
                // ...
            }
        }

Java

objectDetector
        .process(image)
        .addOnFailureListener(e -> {
            // Task failed with an exception.
            // ...
        })
        .addOnSuccessListener(results -> {
            for (DetectedObject detectedObject : results) {
                // ...
            }
        });

5. Get information about labeled objects

If the call to process() succeeds, a list of DetectedObjects is passed to the success listener.

Each DetectedObject contains the following properties:

  • Bounding box: a Rect that indicates the position of the object in the image.
  • Tracking ID: an integer that identifies the object across images. Null in SINGLE_IMAGE_MODE.
  • Labels:
      • Label description: the label's text description. Only returned if the TensorFlow Lite model's metadata contains label descriptions.
      • Label index: the label's index among all the labels supported by the classifier.
      • Label confidence: the confidence value of the object classification.

Kotlin

// The list of detected objects contains one item if multiple
// object detection wasn't enabled.
for (detectedObject in results) {
    val boundingBox = detectedObject.boundingBox
    val trackingId = detectedObject.trackingId
    for (label in detectedObject.labels) {
        val text = label.text
        val index = label.index
        val confidence = label.confidence
    }
}

Java

// The list of detected objects contains one item if multiple
// object detection wasn't enabled.
for (DetectedObject detectedObject : results) {
    Rect boundingBox = detectedObject.getBoundingBox();
    Integer trackingId = detectedObject.getTrackingId();
    for (Label label : detectedObject.getLabels()) {
        String text = label.getText();
        int index = label.getIndex();
        float confidence = label.getConfidence();
    }
}
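
In STREAM_MODE, the tracking ID lets you associate results for the same object across frames. As a minimal sketch (labelVotes and onDetectionResults are illustrative names, not ML Kit API), you could smooth noisy per-frame classifications by accumulating label votes per tracked object:

// Accumulate classification votes per tracked object to smooth out
// frame-to-frame noise. Illustrative only.
val labelVotes = mutableMapOf<Int, MutableMap<String, Int>>()

fun onDetectionResults(results: List<DetectedObject>) {
    for (obj in results) {
        val id = obj.trackingId ?: continue // null in SINGLE_IMAGE_MODE
        val votes = labelVotes.getOrPut(id) { mutableMapOf() }
        for (label in obj.labels) {
            votes[label.text] = (votes[label.text] ?: 0) + 1
        }
        // The most frequently seen label so far for this object:
        val bestLabel = votes.maxByOrNull { it.value }?.key
    }
}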

Ensuring a great user experience

For the best user experience, follow these guidelines in your app:

  • Successful object detection depends on the object's visual complexity. In order to be detected, objects with a small number of visual features might need to take up a larger part of the image. You should provide users with guidance on capturing input that works well with the kind of objects you want to detect.
  • When you use classification, if you want to detect objects that don't fall cleanly into the supported categories, implement special handling for unknown objects, as in the sketch below.
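
A minimal sketch of such handling, assuming labels below the classification confidence threshold are filtered out of the results (handleUnknownObject is an illustrative name):

for (detectedObject in results) {
    // If no label cleared the classification confidence threshold,
    // the labels list is empty: treat the object as unknown.
    val bestLabel = detectedObject.labels.maxByOrNull { it.confidence }
    if (bestLabel == null) {
        handleUnknownObject(detectedObject) // e.g. prompt the user to reframe
    } else {
        // Proceed with bestLabel.text / bestLabel.confidence.
    }
}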

Also, check out the ML Kit Material Design showcase app and the Material Design Patterns for machine learning-powered features collection.

Improving performance

If you want to use object detection in a real-time application, follow these guidelines to achieve the best framerates:
  • When you use streaming mode in a real-time application, don't use multiple object detection, as most devices won't be able to produce adequate framerates.

  • If you use the Camera or camera2 API, throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame (see the sketch after this list). See the VisionProcessorBase class in the quickstart sample app for an example.
  • If you use the CameraX API, be sure that the backpressure strategy is set to its default value ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST. This guarantees only one image will be delivered for analysis at a time. If more images are produced when the analyzer is busy, they will be dropped automatically and not queued for delivery. Once the image being analyzed is closed by calling ImageProxy.close(), the next latest image will be delivered.
  • If you use the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. This renders to the display surface only once for each input frame. See the CameraSourcePreview and GraphicOverlay classes in the quickstart sample app for an example.
  • If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format. If you use the older Camera API, capture images in ImageFormat.NV21 format.
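
As a rough sketch of the throttling approach above (isProcessing, onFrame, and renderResults are illustrative names, assuming frames arrive on a single callback thread):

// Camera/camera2 path: drop incoming frames while the detector is busy.
@Volatile private var isProcessing = false

fun onFrame(image: InputImage) {
    if (isProcessing) return // Detector still busy: drop this frame.
    isProcessing = true
    objectDetector.process(image)
        .addOnSuccessListener { results -> renderResults(results) }
        .addOnCompleteListener { isProcessing = false }
}

// CameraX path: the default backpressure strategy already keeps only the
// latest frame; it is shown here explicitly.
val imageAnalysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()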