Detect image properties

The Image Properties feature detects general attributes of the image, such as dominant color.

[Bali image]
Image credit: Jeremy Bishop on Unsplash.

Dominant colors detected:

[Dominant colors detected in the Bali image]

Image property detection requests

Before you send any requests, set up your Google Cloud project and authentication.

Detect Image Properties in a local image

You can use the Vision API to perform feature detection on a local image file.

For REST requests, send the contents of the image file as a base64 encoded string in the body of your request.

For gcloud and client library requests, specify the path to a local image in your request.

The ColorInfo field does not carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space.
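
For illustration, here is a minimal Python sketch (standard library only, not one of the official samples) that base64-encodes a local file and assembles the REST request body shown in the next section. The file path is a placeholder:

import base64
import json

# Placeholder path; replace with your own image file.
with open("path/to/your/image.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

# Mirrors the "Request JSON body" shown below.
request_body = {
    "requests": [
        {
            "image": {"content": encoded},
            "features": [{"maxResults": 10, "type": "IMAGE_PROPERTIES"}],
        }
    ]
}

# This JSON string is what you would POST to images:annotate.
print(json.dumps(request_body)[:80] + "...")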

REST

Before using any of the request data, make the following replacements:

  • BASE64_ENCODED_IMAGE: The base64 representation (ASCII string) of your binary image data. This string should look similar to the following string:
    • /9j/4QAYRXhpZgAA...9tAVx/zDQDlGxn//2Q==
    Visit the base64 encode topic for more information.
  • RESULTS_INT: (Optional) The maximum number of results to return. If you omit the "maxResults" field and its value, the API returns the default value of 10 results. This field does not apply to the following feature types: TEXT_DETECTION, DOCUMENT_TEXT_DETECTION, or CROP_HINTS.
  • PROJECT_ID: Your Google Cloud project ID.

HTTP method and URL:

POST https://vision.googleapis.com/v1/images:annotate

Request JSON body:

{
  "requests": [
    {
      "image": {
        "content": "BASE64_ENCODED_IMAGE"
      },
      "features": [
        {
          "maxResults": RESULTS_INT,
          "type": "IMAGE_PROPERTIES"
        }
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://vision.googleapis.com/v1/images:annotate"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content

If the request is successful, the server returns a 200 OK HTTP status code and the response in JSON format.

Response:
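
In the response, the dominant colors are nested under responses[0].imagePropertiesAnnotation.dominantColors.colors, with each entry carrying a color (RGB components), a score, and a pixelFraction. As a rough Python sketch of reading such a response, assuming (as the Go samples on this page do) that the component values are in the 0-255 range and should be interpreted as sRGB; the file name is a placeholder:

import json

# Placeholder file: the JSON body returned by images:annotate, for example saved from curl.
with open("response.json") as f:
    response = json.load(f)

colors = response["responses"][0]["imagePropertiesAnnotation"]["dominantColors"]["colors"]
for c in colors:
    rgb = c["color"]
    # Interpret the components as sRGB values in the 0-255 range, as in the Go sample below.
    r, g, b = (int(rgb.get(k, 0)) & 0xFF for k in ("red", "green", "blue"))
    print(f"{c['pixelFraction'] * 100:.1f}% - #{r:02x}{g:02x}{b:02x}")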

Go

Before trying this sample, follow the Go setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Go API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

// detectProperties gets image properties from the Vision API for an image at the given file path.
func detectProperties(w io.Writer, file string) error {
    ctx := context.Background()

    client, err := vision.NewImageAnnotatorClient(ctx)
    if err != nil {
        return err
    }

    f, err := os.Open(file)
    if err != nil {
        return err
    }
    defer f.Close()

    image, err := vision.NewImageFromReader(f)
    if err != nil {
        return err
    }
    props, err := client.DetectImageProperties(ctx, image, nil)
    if err != nil {
        return err
    }

    fmt.Fprintln(w, "Dominant colors:")
    for _, quantized := range props.DominantColors.Colors {
        color := quantized.Color
        r := int(color.Red) & 0xff
        g := int(color.Green) & 0xff
        b := int(color.Blue) & 0xff
        fmt.Fprintf(w, "%2.1f%% - #%02x%02x%02x\n", quantized.PixelFraction*100, r, g, b)
    }

    return nil
}

Java

Before trying this sample, follow the Java setup instructions in the Vision API Quickstart Using Client Libraries. For more information, see the Vision API Java reference documentation.

import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.ColorInfo;
import com.google.cloud.vision.v1.DominantColorsAnnotation;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.protobuf.ByteString;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DetectProperties {

  public static void detectProperties() throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String filePath = "path/to/your/image/file.jpg";
    detectProperties(filePath);
  }

  // Detects image properties such as color frequency from the specified local image.
  public static void detectProperties(String filePath) throws IOException {
    List<AnnotateImageRequest> requests = new ArrayList<>();

    ByteString imgBytes = ByteString.readFrom(new FileInputStream(filePath));

    Image img = Image.newBuilder().setContent(imgBytes).build();
    Feature feat = Feature.newBuilder().setType(Feature.Type.IMAGE_PROPERTIES).build();
    AnnotateImageRequest request =
        AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    requests.add(request);

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
      List<AnnotateImageResponse> responses = response.getResponsesList();

      for (AnnotateImageResponse res : responses) {
        if (res.hasError()) {
          System.out.format("Error: %s%n", res.getError().getMessage());
          return;
        }

        // For full list of available annotations, see http://g.co/cloud/vision/docs
        DominantColorsAnnotation colors = res.getImagePropertiesAnnotation().getDominantColors();
        for (ColorInfo color : colors.getColorsList()) {
          System.out.format(
              "fraction: %f%nr: %f, g: %f, b: %f%n",
              color.getPixelFraction(),
              color.getColor().getRed(),
              color.getColor().getGreen(),
              color.getColor().getBlue());
        }
      }
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Node.js API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

/**
 * TODO(developer): Uncomment the following line before running the sample.
 */
// const fileName = 'Local image file, e.g. /path/to/image.png';

// Performs property detection on the local file
const [result] = await client.imageProperties(fileName);
const colors = result.imagePropertiesAnnotation.dominantColors.colors;
colors.forEach(color => console.log(color));

Python

Before trying this sample, follow the Python setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Python API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

def detect_properties(path):
    """Detects image properties in the file."""
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open(path, "rb") as image_file:
        content = image_file.read()

    image = vision.Image(content=content)

    response = client.image_properties(image=image)
    props = response.image_properties_annotation
    print("Properties:")

    for color in props.dominant_colors.colors:
        print(f"fraction: {color.pixel_fraction}")
        print(f"\tr: {color.color.red}")
        print(f"\tg: {color.color.green}")
        print(f"\tb: {color.color.blue}")
        print(f"\ta: {color.color.alpha}")

    if response.error.message:
        raise Exception(
            "{}\nFor more info on error messages, check: "
            "https://cloud.google.com/apis/design/errors".format(response.error.message)
        )

Additional languages

C#: Please follow the C# setup instructions on the client libraries page and then visit the Vision reference documentation for .NET.

PHP: Please follow the PHP setup instructions on the client libraries page and then visit the Vision reference documentation for PHP.

Ruby: Please follow the Ruby setup instructions on the client libraries page and then visit the Vision reference documentation for Ruby.

Detect Image Properties in a remote image

You can use the Vision API to perform feature detection on a remote image file that is located in Cloud Storage or on the Web. To send a remote file request, specify the file's Web URL or Cloud Storage URI in the request body.

The ColorInfo field does not carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space.
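
If you prefer to send the remote-image request from a script rather than curl, the following is a rough Python sketch (standard library only, not an official sample) that mirrors the curl example below: it obtains an access token from gcloud and posts a gcsImageUri request against the public sample image. PROJECT_ID is a placeholder you must replace:

import json
import subprocess
import urllib.request

# Same token source as the curl example below.
token = subprocess.check_output(
    ["gcloud", "auth", "print-access-token"], text=True
).strip()

# Mirrors the "Request JSON body" shown below, using the public sample image.
body = {
    "requests": [
        {
            "image": {
                "source": {"gcsImageUri": "gs://cloud-samples-data/vision/image_properties/bali.jpeg"}
            },
            "features": [{"maxResults": 10, "type": "IMAGE_PROPERTIES"}],
        }
    ]
}

req = urllib.request.Request(
    "https://vision.googleapis.com/v1/images:annotate",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {token}",
        "x-goog-user-project": "PROJECT_ID",  # placeholder: your project ID
        "Content-Type": "application/json; charset=utf-8",
    },
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# Print the most dominant color entry.
print(result["responses"][0]["imagePropertiesAnnotation"]["dominantColors"]["colors"][0])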

REST

Before using any of the request data, make the following replacements:

  • CLOUD_STORAGE_IMAGE_URI: The path to a valid image file in a Cloud Storage bucket. You must at least have read privileges to the file. Example:
    • gs://cloud-samples-data/vision/image_properties/bali.jpeg
  • RESULTS_INT: (Optional) The maximum number of results to return. If you omit the "maxResults" field and its value, the API returns the default value of 10 results. This field does not apply to the following feature types: TEXT_DETECTION, DOCUMENT_TEXT_DETECTION, or CROP_HINTS.
  • PROJECT_ID: Your Google Cloud project ID.

HTTP method and URL:

POST https://vision.googleapis.com/v1/images:annotate

Request JSON body:

{
  "requests": [
    {
      "image": {
        "source": {
          "gcsImageUri": "CLOUD_STORAGE_IMAGE_URI"
        }
      },
      "features": [
        {
          "maxResults": RESULTS_INT,
          "type": "IMAGE_PROPERTIES"
        }
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_ID" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://vision.googleapis.com/v1/images:annotate"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content

If the request is successful, the server returns a 200 OK HTTP status code and the response in JSON format.

Response:

Go

Before trying this sample, follow the Go setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Go API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

// detectPropertiesURI gets image properties from the Vision API for an image at the given file URI.
func detectPropertiesURI(w io.Writer, file string) error {
    ctx := context.Background()

    client, err := vision.NewImageAnnotatorClient(ctx)
    if err != nil {
        return err
    }

    image := vision.NewImageFromURI(file)

    props, err := client.DetectImageProperties(ctx, image, nil)
    if err != nil {
        return err
    }

    fmt.Fprintln(w, "Dominant colors:")
    for _, quantized := range props.DominantColors.Colors {
        color := quantized.Color
        r := int(color.Red) & 0xff
        g := int(color.Green) & 0xff
        b := int(color.Blue) & 0xff
        fmt.Fprintf(w, "%2.1f%% - #%02x%02x%02x\n", quantized.PixelFraction*100, r, g, b)
    }

    return nil
}
 

Java

Before trying this sample, follow the Java setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Java API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.ColorInfo;
import com.google.cloud.vision.v1.DominantColorsAnnotation;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.ImageSource;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DetectPropertiesGcs {

  public static void detectPropertiesGcs() throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String filePath = "gs://your-gcs-bucket/path/to/image/file.jpg";
    detectPropertiesGcs(filePath);
  }

  // Detects image properties such as color frequency from the specified remote image on Google
  // Cloud Storage.
  public static void detectPropertiesGcs(String gcsPath) throws IOException {
    List<AnnotateImageRequest> requests = new ArrayList<>();

    ImageSource imgSource = ImageSource.newBuilder().setGcsImageUri(gcsPath).build();
    Image img = Image.newBuilder().setSource(imgSource).build();
    Feature feat = Feature.newBuilder().setType(Feature.Type.IMAGE_PROPERTIES).build();
    AnnotateImageRequest request =
        AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    requests.add(request);

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
      List<AnnotateImageResponse> responses = response.getResponsesList();

      for (AnnotateImageResponse res : responses) {
        if (res.hasError()) {
          System.out.format("Error: %s%n", res.getError().getMessage());
          return;
        }

        // For full list of available annotations, see http://g.co/cloud/vision/docs
        DominantColorsAnnotation colors = res.getImagePropertiesAnnotation().getDominantColors();
        for (ColorInfo color : colors.getColorsList()) {
          System.out.format(
              "fraction: %f%nr: %f, g: %f, b: %f%n",
              color.getPixelFraction(),
              color.getColor().getRed(),
              color.getColor().getGreen(),
              color.getColor().getBlue());
        }
      }
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Node.js API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

// Imports the Google Cloud client libraries
const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const bucketName = 'Bucket where the file resides, e.g. my-bucket';
// const fileName = 'Path to file within bucket, e.g. path/to/image.png';

// Performs property detection on the gcs file
const [result] = await client.imageProperties(`gs://${bucketName}/${fileName}`);
const colors = result.imagePropertiesAnnotation.dominantColors.colors;
colors.forEach(color => console.log(color));

Python

Before trying this sample, follow the Python setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Python API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

def detect_properties_uri(uri):
    """Detects image properties in the file located in Google Cloud Storage or
    on the Web."""
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    image = vision.Image()
    image.source.image_uri = uri

    response = client.image_properties(image=image)
    props = response.image_properties_annotation
    print("Properties:")

    for color in props.dominant_colors.colors:
        print(f"frac: {color.pixel_fraction}")
        print(f"\tr: {color.color.red}")
        print(f"\tg: {color.color.green}")
        print(f"\tb: {color.color.blue}")
        print(f"\ta: {color.color.alpha}")

    if response.error.message:
        raise Exception(
            "{}\nFor more info on error messages, check: "
            "https://cloud.google.com/apis/design/errors".format(response.error.message)
        )

gcloud

To perform image property detection, use the gcloud ml vision detect-image-properties command as shown in the following example:

gcloud ml vision detect-image-properties gs://cloud-samples-data/vision/image_properties/bali.jpeg 

Additional languages

C#: Please follow the C# setup instructions on the client libraries page and then visit the Vision reference documentation for .NET.

PHP: Please follow the PHP setup instructions on the client libraries page and then visit the Vision reference documentation for PHP.

Ruby: Please follow the Ruby setup instructions on the client libraries page and then visit the Vision reference documentation for Ruby.

Try it

Try image property detection below. You can use the image already specified (gs://cloud-samples-data/vision/image_properties/bali.jpeg) or specify your own image in its place. Send the request by selecting Execute.

[Bali image]
Image credit: Jeremy Bishop on Unsplash.

Request body:

{
  "requests": [
    {
      "features": [
        {
          "maxResults": 10,
          "type": "IMAGE_PROPERTIES"
        }
      ],
      "image": {
        "source": {
          "imageUri": "gs://cloud-samples-data/vision/image_properties/bali.jpeg"
        }
      }
    }
  ]
}
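
If you would rather run the same request from code instead of the embedded form, the Python sample shown earlier can be called directly against the public sample image, for example:

detect_properties_uri("gs://cloud-samples-data/vision/image_properties/bali.jpeg")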