Label Images Securely with Cloud Vision using Firebase Auth and Functions on Apple platforms
The Firebase ML Vision SDK for labeling objects in an image is now deprecated. This page describes how, as an alternative to the deprecated SDK, you can call Cloud Vision APIs using Firebase Auth and Firebase Functions to allow only authenticated users to access the API.
In order to call a Google Cloud API from your app, you need to create an intermediate
REST API that handles authorization and protects secret values such as API keys. You then need to
write code in your mobile app to authenticate to and communicate with this intermediate service.
One way to create this REST API is by using Firebase Authentication and Functions, which gives you a managed, serverless gateway to
Google Cloud APIs that handles authentication and can be called from your mobile app with
pre-built SDKs.
This guide demonstrates how to use this technique to call the Cloud Vision API from your app.
This method will allow all authenticated users to access Cloud Vision billed services through your Cloud project, so
consider whether this auth mechanism is sufficient for your use case before proceeding. Use of the Cloud Vision API is subject to the Google Cloud Platform License Agreement and Service Specific Terms, and is billed accordingly; see the Cloud Vision pricing page for billing information.
Before you begin
Configure your project
If you have not already added Firebase to your app, do so by following the
steps in the getting started guide.
Use Swift Package Manager to install and manage Firebase dependencies.
In Xcode, with your app project open, navigate to File > Add Packages.
When prompted, add the Firebase Apple platforms SDK repository:
https://github.com/firebase/firebase-ios-sdk.git
Choose the Firebase ML library.
Add the -ObjC flag to the Other Linker Flags section of your target's build settings.
When finished, Xcode will automatically begin resolving and downloading your
dependencies in the background.
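Next, perform some in-app setup. In your app, import Firebase:
Swift
import FirebaseMLModelDownloader
Objective-C
@import FirebaseMLModelDownloader;
A few more configuration steps, and we're ready to go: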
If you haven't already enabled Cloud-based APIs for your project, do so now:
Open the Firebase ML APIs page in the Firebase console.
If you haven't already upgraded your project to the pay-as-you-go Blaze pricing plan, click Upgrade to do so. (You'll be
prompted to upgrade only if your project isn't on the Blaze pricing plan.)
Only projects on the Blaze pricing plan can use Cloud-based APIs.
If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
Configure your existing Firebase API keys to disallow access to the Cloud Vision API:
Open the Credentials page of the Cloud console.
For each API key in the list, open the editing view, and in the Key Restrictions section, add all of the available APIs except the Cloud Vision API to the list.
Deploy the callable function
Next, deploy the Cloud Function you will use to bridge your app and the Cloud
Vision API. The functions-samples repository contains an example
you can use.
By default, accessing the Cloud Vision API through this function will allow
only authenticated users of your app access to the Cloud Vision API. You can
modify the function for different requirements.
To deploy the function:
Clone or download the functions-samples repo and change to the Node-1st-gen/vision-annotate-image directory:
git clone https://github.com/firebase/functions-samples
cd Node-1st-gen/vision-annotate-image
Install dependencies:
cd functions
npm install
cd ..
If you don't have the Firebase CLI, install it.
Initialize a Firebase project in the vision-annotate-image directory. When prompted, select your project in the list.
firebase init
Deploy the function:
firebase deploy --only functions:annotateImage
Add Firebase Auth to your app
The callable function deployed above will reject any request from unauthenticated
users of your app. If you have not already done so, you will need to add Firebase
Auth to your app.
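Any Firebase Auth sign-in method will satisfy the function's check. For example, if your app has no sign-in flow yet, a minimal sketch using anonymous authentication might look like the following (this assumes the Anonymous sign-in provider is enabled in your project's Authentication settings in the Firebase console):
Swift
import FirebaseAuth

// A minimal sketch: sign the user in anonymously before calling the function.
// Assumes the Anonymous provider is enabled in the Firebase console.
Auth.auth().signInAnonymously { authResult, error in
    if let error = error {
        print("Anonymous sign-in failed: \(error.localizedDescription)")
        return
    }
    // The user is now authenticated; subsequent callable function
    // requests automatically include the user's ID token.
}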
Add necessary dependencies to your app
Use Swift Package Manager to install the Cloud Functions for Firebase library.
Now you are ready to label images.
1. Prepare the input image
In order to call Cloud Vision, the image must be formatted as a base64-encoded
string. To process a UIImage:
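Swift
guard let imageData = uiImage.jpegData(compressionQuality: 1.0) else { return }
let base64encodedImage = imageData.base64EncodedString()
Objective-C
NSData *imageData = UIImageJPEGRepresentation(uiImage, 1.0f);
NSString *base64encodedImage =
    [imageData base64EncodedStringWithOptions:NSDataBase64Encoding76CharacterLineLength];
2. Invoke the callable function to label the image
To label objects in an image, invoke the callable function, passing a JSON Cloud Vision request.
First, initialize an instance of Cloud Functions:
Swift
lazy var functions = Functions.functions()
Objective-C
@property(strong, nonatomic) FIRFunctions *functions;
Next, create a request with Type set to LABEL_DETECTION:
Swift
let requestData = [
  "image": ["content": base64encodedImage],
  "features": ["maxResults": 5, "type": "LABEL_DETECTION"]
]
Objective-C
NSDictionary *requestData = @{
  @"image": @{@"content": base64encodedImage},
  @"features": @{@"maxResults": @5, @"type": @"LABEL_DETECTION"}
};
Finally, invoke the function:
Swift
do {
  let result = try await functions.httpsCallable("annotateImage").call(requestData)
  print(result)
} catch {
  if let error = error as NSError? {
    if error.domain == FunctionsErrorDomain {
      let code = FunctionsErrorCode(rawValue: error.code)
      let message = error.localizedDescription
      let details = error.userInfo[FunctionsErrorDetailsKey]
    }
    // ...
  }
}
Objective-C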
[[_functions HTTPSCallableWithName:@"annotateImage"]
  callWithObject:requestData
  completion:^(FIRHTTPSCallableResult * _Nullable result, NSError * _Nullable error) {
    if (error) {
      if ([error.domain isEqualToString:@"com.firebase.functions"]) {
        FIRFunctionsErrorCode code = error.code;
        NSString *message = error.localizedDescription;
        NSObject *details = error.userInfo[@"details"];
      }
      // ...
    }
    // Function completed successfully
    // Get information about labeled objects
  }];
3. Get information about labeled objects
If the image labeling operation succeeds, a JSON response of BatchAnnotateImagesResponse will be returned in the task's result. Each object in the labelAnnotations array represents something that was labeled in the image. For each label, you
can get the label's text description, its Knowledge Graph entity ID (if available), and the confidence score of the match. For example:
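Swift
if let labelArray = (result?.data as? [String: Any])?["labelAnnotations"] as? [[String: Any]] {
  for labelObj in labelArray {
    let text = labelObj["description"]
    let entityId = labelObj["mid"]
    let confidence = labelObj["score"]
  }
}
Objective-C
NSArray *labelArray = result.data[@"labelAnnotations"];
for (NSDictionary *labelObj in labelArray) {
  NSString *text = labelObj[@"description"];
  NSString *entityId = labelObj[@"mid"];
  NSNumber *confidence = labelObj[@"score"];
}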
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["| The Firebase ML Vision SDK for labeling objects in an image is\n| now deprecated\n| [(See the\n| outdated docs here).](/docs/ml/ios/label-images-deprecated)\n| This page describes how, as an alternative to the deprecated SDK, you can\n| call Cloud Vision APIs using Firebase Auth and Firebase Functions to allow\n| only authenticated users to access the API.\n\n\nIn order to call a Google Cloud API from your app, you need to create an intermediate\nREST API that handles authorization and protects secret values such as API keys. You then need to\nwrite code in your mobile app to authenticate to and communicate with this intermediate service.\n\n\nOne way to create this REST API is by using Firebase Authentication and Functions, which gives you a managed, serverless gateway to\nGoogle Cloud APIs that handles authentication and can be called from your mobile app with\npre-built SDKs.\n\n\nThis guide demonstrates how to use this technique to call the Cloud Vision API from your app.\nThis method will allow all authenticated users to access Cloud Vision billed services through your Cloud project, so\nconsider whether this auth mechanism is sufficient for your use case before proceeding.\n| Use of the Cloud Vision APIs is subject to the [Google Cloud Platform License\n| Agreement](https://cloud.google.com/terms/) and [Service\n| Specific Terms](https://cloud.google.com/terms/service-terms), and billed accordingly. For billing information, see the [Pricing](https://cloud.google.com/vision/pricing) page.\n| **Looking for on-device image labeling?** Try the [standalone ML Kit library](https://developers.google.com/ml-kit/vision/image-labeling).\n\n\u003cbr /\u003e\n\nBefore you begin\n\n\u003cbr /\u003e\n\nConfigure your project If you have not already added Firebase to your app, do so by following the steps in the [getting started guide](/docs/ios/setup).\n\nUse Swift Package Manager to install and manage Firebase dependencies.\n| Visit [our installation guide](/docs/ios/installation-methods) to learn about the different ways you can add Firebase SDKs to your Apple project, including importing frameworks directly and using CocoaPods.\n\n1. In Xcode, with your app project open, navigate to **File \\\u003e Add Packages**.\n2. When prompted, add the Firebase Apple platforms SDK repository: \n\n```text\n https://github.com/firebase/firebase-ios-sdk.git\n```\n| **Note:** New projects should use the default (latest) SDK version, but you can choose an older version if needed.\n3. Choose the Firebase ML library.\n4. Add the `-ObjC` flag to the *Other Linker Flags* section of your target's build settings.\n5. When finished, Xcode will automatically begin resolving and downloading your dependencies in the background.\n\n\nNext, perform some in-app setup:\n\n1. In your app, import Firebase:\n\n Swift \n\n ```swift\n import FirebaseMLModelDownloader\n ```\n\n Objective-C \n\n ```objective-c\n @import FirebaseMLModelDownloader;\n ```\n\n\nA few more configuration steps, and we're ready to go:\n\n1. 
If you haven't already enabled Cloud-based APIs for your project, do so\n now:\n\n 1. Open the [Firebase ML\n APIs page](//console.firebase.google.com/project/_/ml/apis) in the Firebase console.\n 2. If you haven't already upgraded your project to the\n [pay-as-you-go Blaze pricing plan](/pricing), click **Upgrade** to do so. (You'll be\n prompted to upgrade only if your project isn't on the\n Blaze pricing plan.)\n\n Only projects on the Blaze pricing plan can use\n Cloud-based APIs.\n 3. If Cloud-based APIs aren't already enabled, click **Enable Cloud-based APIs**.\n2. Configure your existing Firebase API keys to disallow access to the Cloud Vision API:\n 1. Open the [Credentials](https://console.cloud.google.com/apis/credentials?project=_) page of the Cloud console.\n 2. For each API key in the list, open the editing view, and in the Key Restrictions section, add all of the available APIs *except* the Cloud Vision API to the list.\n\nDeploy the callable function\n\nNext, deploy the Cloud Function you will use to bridge your app and the Cloud\nVision API. The `functions-samples` repository contains an example\nyou can use.\n\nBy default, accessing the Cloud Vision API through this function will allow\nonly authenticated users of your app access to the Cloud Vision API. You can\nmodify the function for different requirements.\n\nTo deploy the function:\n\n1. Clone or download the [functions-samples repo](https://github.com/firebase/functions-samples) and change to the `Node-1st-gen/vision-annotate-image` directory: \n\n git clone https://github.com/firebase/functions-samples\n cd Node-1st-gen/vision-annotate-image\n\n2. Install dependencies: \n\n cd functions\n npm install\n cd ..\n\n3. If you don't have the Firebase CLI, [install it](/docs/cli#setup_update_cli).\n4. Initialize a Firebase project in the `vision-annotate-image` directory. When prompted, select your project in the list. \n\n ```\n firebase init\n ```\n5. Deploy the function: \n\n ```\n firebase deploy --only functions:annotateImage\n ```\n\nAdd Firebase Auth to your app\n\nThe callable function deployed above will reject any request from non-authenticated\nusers of your app. If you have not already done so, you will need to [add Firebase\nAuth to your app.](https://firebase.google.com/docs/auth/ios/start#add_to_your_app)\n\nAdd necessary dependencies to your app\n\n\nUse Swift Package Manager to install the Cloud Functions for Firebase library.\n\nNow you are ready to label images.\n\n1. Prepare the input image In order to call Cloud Vision, the image must be formatted as a base64-encoded string. To process a `UIImage`: \n\nSwift \n\n```swift\nguard let imageData = uiImage.jpegData(compressionQuality: 1.0) else { return }\nlet base64encodedImage = imageData.base64EncodedString()\n```\n\nObjective-C \n\n```objective-c\nNSData *imageData = UIImageJPEGRepresentation(uiImage, 1.0f);\nNSString *base64encodedImage =\n [imageData base64EncodedStringWithOptions:NSDataBase64Encoding76CharacterLineLength];\n```\n\n2. Invoke the callable function to label the image To label objects in an image, invoke the callable function passing a [JSON Cloud Vision request](https://cloud.google.com/vision/docs/request#json_request_format).\n\n\u003cbr /\u003e\n\n1. First, initialize an instance of Cloud Functions:\n\n Swift \n\n lazy var functions = Functions.functions()\n\n Objective-C \n\n @property(strong, nonatomic) FIRFunctions *functions;\n\n2. 
Create a request with [Type](https://cloud.google.com/vision/docs/reference/rest/v1/Feature#type) set to `LABEL_DETECTION`:\n\n Swift \n\n let requestData = [\n \"image\": [\"content\": base64encodedImage],\n \"features\": [\"maxResults\": 5, \"type\": \"LABEL_DETECTION\"]\n ]\n\n Objective-C \n\n NSDictionary *requestData = @{\n @\"image\": @{@\"content\": base64encodedImage},\n @\"features\": @{@\"maxResults\": @5, @\"type\": @\"LABEL_DETECTION\"}\n };\n\n3. Finally, invoke the function:\n\n Swift \n\n do {\n let result = try await functions.httpsCallable(\"annotateImage\").call(requestData)\n print(result)\n } catch {\n if let error = error as NSError? {\n if error.domain == FunctionsErrorDomain {\n let code = FunctionsErrorCode(rawValue: error.code)\n let message = error.localizedDescription\n let details = error.userInfo[FunctionsErrorDetailsKey]\n }\n // ...\n }\n }\n\n Objective-C \n\n [[_functions HTTPSCallableWithName:@\"annotateImage\"]\n callWithObject:requestData\n completion:^(FIRHTTPSCallableResult * _Nullable result, NSError * _Nullable error) {\n if (error) {\n if ([error.domain isEqualToString:@\"com.firebase.functions\"]) {\n FIRFunctionsErrorCode code = error.code;\n NSString *message = error.localizedDescription;\n NSObject *details = error.userInfo[@\"details\"];\n }\n // ...\n }\n // Function completed succesfully\n // Get information about labeled objects\n\n }];\n\n3. Get information about labeled objects If the image labeling operation succeeds, a JSON response of [BatchAnnotateImagesResponse](https://cloud.google.com/vision/docs/reference/rest/v1/BatchAnnotateImagesResponse) will be returned in the task's result. Each object in the `labelAnnotations` array represents something that was labeled in the image. For each label, you can get the label's text description, its [Knowledge Graph entity ID](https://developers.google.com/knowledge-graph/) (if available), and the confidence score of the match. For example:\n\n\u003cbr /\u003e\n\nSwift \n\n if let labelArray = (result?.data as? [String: Any])?[\"labelAnnotations\"] as? [[String:Any]] {\n for labelObj in labelArray {\n let text = labelObj[\"description\"]\n let entityId = labelObj[\"mid\"]\n let confidence = labelObj[\"score\"]\n }\n }\n\nObjective-C \n\n NSArray *labelArray = result.data[@\"labelAnnotations\"];\n for (NSDictionary *labelObj in labelArray) {\n NSString *text = labelObj[@\"description\"];\n NSString *entityId = labelObj[@\"mid\"];\n NSNumber *confidence = labelObj[@\"score\"];\n }"]]