You can use Firebase ML to label objects recognized in an image. See the overview for information about this API's features.
Before you begin
- If you haven't already added Firebase to your app, do so by following the steps in the getting started guide.

- Use Swift Package Manager to install and manage Firebase dependencies:

  1. In Xcode, with your app project open, navigate to File > Add Packages.
  2. When prompted, add the Firebase Apple platforms SDK repository:

     https://github.com/firebase/firebase-ios-sdk.git

  3. Choose the Firebase ML library.
  4. Add the -ObjC flag to the Other Linker Flags section of your target's build settings.
  5. When finished, Xcode will automatically begin resolving and downloading your dependencies in the background.

- Next, perform some in-app setup. In your app, import Firebase:

  Swift

  ```swift
  import FirebaseMLModelDownloader
  ```

  Objective-C

  ```objc
  @import FirebaseMLModelDownloader;
  ```

- If you haven't already enabled Cloud-based APIs for your project, do so now:

  1. Open the Firebase ML APIs page in the Firebase console.
  2. If you haven't already upgraded your project to the pay-as-you-go Blaze pricing plan, click Upgrade to do so. (You'll be prompted to upgrade only if your project isn't on the Blaze pricing plan.) Only projects on the Blaze pricing plan can use Cloud-based APIs.
  3. If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.

Now you are ready to label images.
1. Prepare the input image
Create a VisionImage object using a UIImage or a CMSampleBufferRef.

To use a UIImage:

- If necessary, rotate the image so that its imageOrientation property is .up (see the sketch after this list).
- Create a VisionImage object using the correctly-rotated UIImage. Do not specify any rotation metadata; the default value, .topLeft, must be used.

  Swift

  ```swift
  let image = VisionImage(image: uiImage)
  ```

  Objective-C

  ```objc
  FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
  ```
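If your source image's imageOrientation isn't already .up, one common way to normalize it is to redraw the image. This is a minimal sketch using UIKit's UIGraphicsImageRenderer; the helper name normalizedImage(from:) is ours, not part of the SDK:

```swift
import UIKit

// A minimal sketch, not part of the Firebase SDK: redraws an image
// so that the result's imageOrientation is .up. Drawing a UIImage
// bakes its orientation into the rendered bitmap.
func normalizedImage(from image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}
```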
To use a CMSampleBufferRef:

- Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer.

  To get the image orientation:

  Swift

  ```swift
  func imageOrientation(
      deviceOrientation: UIDeviceOrientation,
      cameraPosition: AVCaptureDevice.Position
  ) -> VisionDetectorImageOrientation {
      switch deviceOrientation {
      case .portrait:
          return cameraPosition == .front ? .leftTop : .rightTop
      case .landscapeLeft:
          return cameraPosition == .front ? .bottomLeft : .topLeft
      case .portraitUpsideDown:
          return cameraPosition == .front ? .rightBottom : .leftBottom
      case .landscapeRight:
          return cameraPosition == .front ? .topRight : .bottomRight
      case .faceDown, .faceUp, .unknown:
          return .leftTop
      }
  }
  ```

  Objective-C

  ```objc
  - (FIRVisionDetectorImageOrientation)
      imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                             cameraPosition:(AVCaptureDevicePosition)cameraPosition {
      switch (deviceOrientation) {
          case UIDeviceOrientationPortrait:
              if (cameraPosition == AVCaptureDevicePositionFront) {
                  return FIRVisionDetectorImageOrientationLeftTop;
              } else {
                  return FIRVisionDetectorImageOrientationRightTop;
              }
          case UIDeviceOrientationLandscapeLeft:
              if (cameraPosition == AVCaptureDevicePositionFront) {
                  return FIRVisionDetectorImageOrientationBottomLeft;
              } else {
                  return FIRVisionDetectorImageOrientationTopLeft;
              }
          case UIDeviceOrientationPortraitUpsideDown:
              if (cameraPosition == AVCaptureDevicePositionFront) {
                  return FIRVisionDetectorImageOrientationRightBottom;
              } else {
                  return FIRVisionDetectorImageOrientationLeftBottom;
              }
          case UIDeviceOrientationLandscapeRight:
              if (cameraPosition == AVCaptureDevicePositionFront) {
                  return FIRVisionDetectorImageOrientationTopRight;
              } else {
                  return FIRVisionDetectorImageOrientationBottomRight;
              }
          default:
              return FIRVisionDetectorImageOrientationTopLeft;
      }
  }
  ```

  Then, create the metadata object:

  Swift

  ```swift
  let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
  let metadata = VisionImageMetadata()
  metadata.orientation = imageOrientation(
      deviceOrientation: UIDevice.current.orientation,
      cameraPosition: cameraPosition
  )
  ```

  Objective-C

  ```objc
  FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
  AVCaptureDevicePosition cameraPosition =
      AVCaptureDevicePositionBack;  // Set to the capture device you used.
  metadata.orientation =
      [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                   cameraPosition:cameraPosition];
  ```
- Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:

  Swift

  ```swift
  let image = VisionImage(buffer: sampleBuffer)
  image.metadata = metadata
  ```

  Objective-C

  ```objc
  FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
  image.metadata = metadata;
  ```
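For context, here is how these pieces might fit together in a live capture pipeline. This is a minimal sketch, assuming an AVCaptureVideoDataOutput whose delegate is a hypothetical CameraViewController; it reuses the imageOrientation(deviceOrientation:cameraPosition:) helper defined above:

```swift
import AVFoundation
import UIKit

// A minimal sketch: CameraViewController is a hypothetical class that
// owns the capture session; it is not part of the Firebase SDK.
extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Build the orientation metadata for this frame, using the
        // helper defined earlier in this section.
        let metadata = VisionImageMetadata()
        metadata.orientation = imageOrientation(
            deviceOrientation: UIDevice.current.orientation,
            cameraPosition: .back)  // Match the capture device you used.

        // Wrap the frame and attach the metadata.
        let image = VisionImage(buffer: sampleBuffer)
        image.metadata = metadata

        // Pass `image` to the labeler, as shown in the next step.
    }
}
```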
2. Configure and run the image labeler
To label objects in an image, pass the VisionImage object to the VisionImageLabeler's processImage() method.

- First, get an instance of VisionImageLabeler:

  Swift

  ```swift
  let labeler = Vision.vision().cloudImageLabeler()

  // Or, to set the minimum confidence required:
  // let options = VisionCloudImageLabelerOptions()
  // options.confidenceThreshold = 0.7
  // let labeler = Vision.vision().cloudImageLabeler(options: options)
  ```

  Objective-C

  ```objc
  FIRVisionImageLabeler *labeler = [[FIRVision vision] cloudImageLabeler];

  // Or, to set the minimum confidence required:
  // FIRVisionCloudImageLabelerOptions *options =
  //     [[FIRVisionCloudImageLabelerOptions alloc] init];
  // options.confidenceThreshold = 0.7;
  // FIRVisionImageLabeler *labeler =
  //     [[FIRVision vision] cloudImageLabelerWithOptions:options];
  ```
- Then, pass the image to the processImage() method:

  Swift

  ```swift
  labeler.process(image) { labels, error in
      guard error == nil, let labels = labels else { return }

      // Task succeeded.
      // ...
  }
  ```

  Objective-C

  ```objc
  [labeler processImage:image
             completion:^(NSArray<FIRVisionImageLabel *> *_Nullable labels,
                          NSError *_Nullable error) {
      if (error != nil) { return; }

      // Task succeeded.
      // ...
  }];
  ```
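Putting the two steps together, a convenience wrapper might look like the following. This is a minimal sketch, assuming the setup above; the function name labelImage(_:completion:) is ours, not part of the SDK:

```swift
import UIKit

// A minimal sketch, assuming Firebase has been configured as described
// above. The wrapper name is ours, not part of the SDK.
func labelImage(_ uiImage: UIImage,
                completion: @escaping ([VisionImageLabel]) -> Void) {
    // The image must already be oriented .up; see step 1.
    let image = VisionImage(image: uiImage)
    let labeler = Vision.vision().cloudImageLabeler()

    labeler.process(image) { labels, error in
        guard error == nil, let labels = labels else {
            completion([])
            return
        }
        completion(labels)
    }
}
```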
3. Get information about labeled objects
If image labeling succeeds, an array of VisionImageLabel objects will be passed to the completion handler. From each object, you can get information about a feature recognized in the image. For example:
Swift

```swift
for label in labels {
    let labelText = label.text
    let entityId = label.entityID
    let confidence = label.confidence
}
```
 
 
Objective-C

```objc
for (FIRVisionImageLabel *label in labels) {
    NSString *labelText = label.text;
    NSString *entityId = label.entityID;
    NSNumber *confidence = label.confidence;
}
```
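For example, you might keep only the most confident labels before displaying them. This is a minimal sketch; the helper name topLabels(_:minimumConfidence:) is ours, and it assumes confidence is an optional NSNumber, as the Objective-C example above suggests:

```swift
// A minimal sketch, not part of the SDK: filters out low-confidence
// labels and sorts the rest, highest confidence first.
func topLabels(_ labels: [VisionImageLabel],
               minimumConfidence: Float = 0.7) -> [VisionImageLabel] {
    return labels
        .filter { ($0.confidence?.floatValue ?? 0) >= minimumConfidence }
        .sorted {
            ($0.confidence?.floatValue ?? 0) > ($1.confidence?.floatValue ?? 0)
        }
}
```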
 
 
Next steps
- Before you deploy an app that uses a Cloud API to production, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.

