Detect explicit content (SafeSearch)

SafeSearch Detection detects explicit content such as adult content or violent content within an image. This feature uses five categories (adult, spoof, medical, violence, and racy) and returns the likelihood that each is present in a given image. See the SafeSearchAnnotation page for details on these fields.
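Because each category comes back as a likelihood level rather than a score, applications typically compare the returned level against a chosen cutoff. The following sketch shows one possible moderation check; the LIKELY threshold and the `should_block` helper are illustrative choices, not an API recommendation:

```python
# Likelihood values returned in safeSearchAnnotation, in increasing order.
LIKELIHOODS = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def should_block(annotation, threshold="LIKELY"):
    """Return True if any category meets or exceeds the threshold.

    `annotation` mirrors the safeSearchAnnotation JSON fields, e.g.
    {"adult": "UNLIKELY", "violence": "LIKELY"}.
    """
    cutoff = LIKELIHOODS.index(threshold)
    return any(LIKELIHOODS.index(level) >= cutoff for level in annotation.values())

print(should_block({"adult": "UNLIKELY", "violence": "LIKELY"}))  # True
```

Pick a threshold that matches your tolerance for false positives; a stricter service might block at POSSIBLE instead.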

SafeSearch detection requests

Set up your Google Cloud project and authentication

Explicit content detection on a local image

You can use the Vision API to perform feature detection on a local image file.

For REST requests, send the contents of the image file as a base64 encoded string in the body of your request.

For gcloud and client library requests, specify the path to a local image in your request.

REST

Before using any of the request data, make the following replacements:

  • BASE64_ENCODED_IMAGE: The base64 representation (ASCII string) of your binary image data. This string should look similar to the following string:
    • /9j/4QAYRXhpZgAA...9tAVx/zDQDlGxn//2Q==
    Visit the base64 encode topic for more information.
  • PROJECT_ID: Your Google Cloud project ID.
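If you need to produce the base64 string yourself, the Python standard library can encode the image bytes. A minimal sketch; the four header bytes stand in for your actual image data, which you would read from a file as shown in the comment:

```python
import base64

# In practice, read the bytes from your image file:
#   with open("image.jpg", "rb") as f:
#       image_bytes = f.read()
image_bytes = b"\xff\xd8\xff\xe0"  # placeholder: the first bytes of a JPEG file

# Encode as the ASCII string the REST request body expects.
encoded = base64.b64encode(image_bytes).decode("ascii")
print(encoded)  # /9j/4A==
```

Note that JPEG data encodes to strings starting with "/9j/", matching the example above.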

HTTP method and URL:

POST https://vision.googleapis.com/v1/images:annotate

Request JSON body:

{
  "requests": [
    {
      "image": {
        "content": " BASE64_ENCODED_IMAGE 
"
      },
      "features": [
        {
          "type": "SAFE_SEARCH_DETECTION"
        },
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_ID " \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://vision.googleapis.com/v1/images:annotate"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

{
  "responses": [
    {
      "safeSearchAnnotation": {
        "adult": "UNLIKELY",
        "spoof": "VERY_UNLIKELY",
        "medical": "VERY_UNLIKELY",
        "violence": " LIKELY 
",
        "racy": " POSSIBLE 
"
      }
    }
  ]
}

Go

Before trying this sample, follow the Go setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Go API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment .

// detectSafeSearch gets image properties from the Vision API for an image at the given file path.
func detectSafeSearch(w io.Writer, file string) error {
	ctx := context.Background()

	client, err := vision.NewImageAnnotatorClient(ctx)
	if err != nil {
		return err
	}

	f, err := os.Open(file)
	if err != nil {
		return err
	}
	defer f.Close()

	image, err := vision.NewImageFromReader(f)
	if err != nil {
		return err
	}
	props, err := client.DetectSafeSearch(ctx, image, nil)
	if err != nil {
		return err
	}

	fmt.Fprintln(w, "Safe Search properties:")
	fmt.Fprintln(w, "Adult:", props.Adult)
	fmt.Fprintln(w, "Medical:", props.Medical)
	fmt.Fprintln(w, "Racy:", props.Racy)
	fmt.Fprintln(w, "Spoofed:", props.Spoof)
	fmt.Fprintln(w, "Violence:", props.Violence)

	return nil
}
 

Java

Before trying this sample, follow the Java setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Java API reference documentation.

import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.SafeSearchAnnotation;
import com.google.protobuf.ByteString;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DetectSafeSearch {

  public static void detectSafeSearch() throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String filePath = "path/to/your/image/file.jpg";
    detectSafeSearch(filePath);
  }

  // Detects whether the specified image has features you would want to moderate.
  public static void detectSafeSearch(String filePath) throws IOException {
    List<AnnotateImageRequest> requests = new ArrayList<>();

    ByteString imgBytes = ByteString.readFrom(new FileInputStream(filePath));

    Image img = Image.newBuilder().setContent(imgBytes).build();
    Feature feat = Feature.newBuilder().setType(Feature.Type.SAFE_SEARCH_DETECTION).build();
    AnnotateImageRequest request =
        AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    requests.add(request);

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
      List<AnnotateImageResponse> responses = response.getResponsesList();

      for (AnnotateImageResponse res : responses) {
        if (res.hasError()) {
          System.out.format("Error: %s%n", res.getError().getMessage());
          return;
        }

        // For full list of available annotations, see http://g.co/cloud/vision/docs
        SafeSearchAnnotation annotation = res.getSafeSearchAnnotation();
        System.out.format(
            "adult: %s%nmedical: %s%nspoofed: %s%nviolence: %s%nracy: %s%n",
            annotation.getAdult(),
            annotation.getMedical(),
            annotation.getSpoof(),
            annotation.getViolence(),
            annotation.getRacy());
      }
    }
  }
}
 

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Node.js API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment .

const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

/**
 * TODO(developer): Uncomment the following line before running the sample.
 */
// const fileName = 'Local image file, e.g. /path/to/image.png';

// Performs safe search detection on the local file
const [result] = await client.safeSearchDetection(fileName);
const detections = result.safeSearchAnnotation;
console.log('Safe search:');
console.log(`Adult: ${detections.adult}`);
console.log(`Medical: ${detections.medical}`);
console.log(`Spoof: ${detections.spoof}`);
console.log(`Violence: ${detections.violence}`);
console.log(`Racy: ${detections.racy}`);
 

Python

Before trying this sample, follow the Python setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Python API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment .

def detect_safe_search(path):
    """Detects unsafe features in the file."""
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open(path, "rb") as image_file:
        content = image_file.read()

    image = vision.Image(content=content)

    response = client.safe_search_detection(image=image)
    safe = response.safe_search_annotation

    # Names of likelihood from google.cloud.vision.enums
    likelihood_name = (
        "UNKNOWN",
        "VERY_UNLIKELY",
        "UNLIKELY",
        "POSSIBLE",
        "LIKELY",
        "VERY_LIKELY",
    )
    print("Safe search:")

    print(f"adult: {likelihood_name[safe.adult]}")
    print(f"medical: {likelihood_name[safe.medical]}")
    print(f"spoofed: {likelihood_name[safe.spoof]}")
    print(f"violence: {likelihood_name[safe.violence]}")
    print(f"racy: {likelihood_name[safe.racy]}")

    if response.error.message:
        raise Exception(
            "{}\nFor more info on error messages, check: "
            "https://cloud.google.com/apis/design/errors".format(response.error.message)
        )
 

Explicit content detection on a remote image

You can use the Vision API to perform feature detection on a remote image file that is located in Cloud Storage or on the Web. To send a remote file request, specify the file's Web URL or Cloud Storage URI in the request body.
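For programmatic callers, the remote-image request body can be assembled as a plain data structure and serialized. A minimal sketch; the gs:// URI is a placeholder for your own bucket and object:

```python
import json

# Build the images:annotate request body for a remote image.
# The gs:// URI below is a placeholder; substitute your own bucket and object.
request_body = {
    "requests": [
        {
            "image": {"source": {"imageUri": "gs://my-storage-bucket/img/image1.png"}},
            "features": [{"type": "SAFE_SEARCH_DETECTION"}],
        }
    ]
}

print(json.dumps(request_body, indent=2))
```

The serialized JSON matches the request body shown in the REST section below, so it can be POSTed to https://vision.googleapis.com/v1/images:annotate with any HTTP client.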

REST

Before using any of the request data, make the following replacements:

  • CLOUD_STORAGE_IMAGE_URI: The path to a valid image file in a Cloud Storage bucket. You must have at least read privileges to the file. Example:
    • gs://my-storage-bucket/img/image1.png
  • PROJECT_ID: Your Google Cloud project ID.

HTTP method and URL:

POST https://vision.googleapis.com/v1/images:annotate

Request JSON body:

{
  "requests": [
    {
      "image": {
        "source": {
          "imageUri": " CLOUD_STORAGE_IMAGE_URI 
"
        }
      },
      "features": [
        {
          "type": "SAFE_SEARCH_DETECTION"
        }
      ]
    }
  ]
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "x-goog-user-project: PROJECT_ID " \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://vision.googleapis.com/v1/images:annotate"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred"; "x-goog-user-project" = "PROJECT_ID" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://vision.googleapis.com/v1/images:annotate" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

{
  "responses": [
    {
      "safeSearchAnnotation": {
        "adult": "UNLIKELY",
        "spoof": "VERY_UNLIKELY",
        "medical": "VERY_UNLIKELY",
        "violence": " LIKELY 
",
        "racy": " POSSIBLE 
"
      }
    }
  ]
}

Go

Before trying this sample, follow the Go setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Go API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment .

// detectSafeSearchURI gets image properties from the Vision API for an image at the given URI.
func detectSafeSearchURI(w io.Writer, file string) error {
	ctx := context.Background()

	client, err := vision.NewImageAnnotatorClient(ctx)
	if err != nil {
		return err
	}

	image := vision.NewImageFromURI(file)

	props, err := client.DetectSafeSearch(ctx, image, nil)
	if err != nil {
		return err
	}

	fmt.Fprintln(w, "Safe Search properties:")
	fmt.Fprintln(w, "Adult:", props.Adult)
	fmt.Fprintln(w, "Medical:", props.Medical)
	fmt.Fprintln(w, "Racy:", props.Racy)
	fmt.Fprintln(w, "Spoofed:", props.Spoof)
	fmt.Fprintln(w, "Violence:", props.Violence)

	return nil
}
 

Java

Before trying this sample, follow the Java setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Java API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment .

import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Feature.Type;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.ImageSource;
import com.google.cloud.vision.v1.SafeSearchAnnotation;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DetectSafeSearchGcs {

  public static void detectSafeSearchGcs() throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String filePath = "gs://your-gcs-bucket/path/to/image/file.jpg";
    detectSafeSearchGcs(filePath);
  }

  // Detects whether the specified image on Google Cloud Storage has features you would want to
  // moderate.
  public static void detectSafeSearchGcs(String gcsPath) throws IOException {
    List<AnnotateImageRequest> requests = new ArrayList<>();

    ImageSource imgSource = ImageSource.newBuilder().setGcsImageUri(gcsPath).build();
    Image img = Image.newBuilder().setSource(imgSource).build();
    Feature feat = Feature.newBuilder().setType(Type.SAFE_SEARCH_DETECTION).build();
    AnnotateImageRequest request =
        AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
    requests.add(request);

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      BatchAnnotateImagesResponse response = client.batchAnnotateImages(requests);
      List<AnnotateImageResponse> responses = response.getResponsesList();

      for (AnnotateImageResponse res : responses) {
        if (res.hasError()) {
          System.out.format("Error: %s%n", res.getError().getMessage());
          return;
        }

        // For full list of available annotations, see http://g.co/cloud/vision/docs
        SafeSearchAnnotation annotation = res.getSafeSearchAnnotation();
        System.out.format(
            "adult: %s%nmedical: %s%nspoofed: %s%nviolence: %s%nracy: %s%n",
            annotation.getAdult(),
            annotation.getMedical(),
            annotation.getSpoof(),
            annotation.getViolence(),
            annotation.getRacy());
      }
    }
  }
}
 

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Node.js API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment .

// Imports the Google Cloud client libraries
const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const bucketName = 'Bucket where the file resides, e.g. my-bucket';
// const fileName = 'Path to file within bucket, e.g. path/to/image.png';

// Performs safe search property detection on the remote file
const [result] = await client.safeSearchDetection(`gs://${bucketName}/${fileName}`);
const detections = result.safeSearchAnnotation;
console.log(`Adult: ${detections.adult}`);
console.log(`Spoof: ${detections.spoof}`);
console.log(`Medical: ${detections.medical}`);
console.log(`Violence: ${detections.violence}`);
 

Python

Before trying this sample, follow the Python setup instructions in the Vision quickstart using client libraries. For more information, see the Vision Python API reference documentation.

To authenticate to Vision, set up Application Default Credentials. For more information, see Set up authentication for a local development environment .

def detect_safe_search_uri(uri):
    """Detects unsafe features in the file located in Google Cloud Storage or
    on the Web."""
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    image = vision.Image()
    image.source.image_uri = uri

    response = client.safe_search_detection(image=image)
    safe = response.safe_search_annotation

    # Names of likelihood from google.cloud.vision.enums
    likelihood_name = (
        "UNKNOWN",
        "VERY_UNLIKELY",
        "UNLIKELY",
        "POSSIBLE",
        "LIKELY",
        "VERY_LIKELY",
    )
    print("Safe search:")

    print(f"adult: {likelihood_name[safe.adult]}")
    print(f"medical: {likelihood_name[safe.medical]}")
    print(f"spoofed: {likelihood_name[safe.spoof]}")
    print(f"violence: {likelihood_name[safe.violence]}")
    print(f"racy: {likelihood_name[safe.racy]}")

    if response.error.message:
        raise Exception(
            "{}\nFor more info on error messages, check: "
            "https://cloud.google.com/apis/design/errors".format(response.error.message)
        )
 

gcloud

To perform SafeSearch detection, use the gcloud ml vision detect-safe-search command as shown in the following example:

gcloud ml vision detect-safe-search gs://my_bucket/input_file