Use models

Use a trained Custom Speech-to-Text model in your production application or in benchmarking workflows. As soon as you deploy your model through a dedicated endpoint, you automatically get programmatic access through a recognizer object, which you can use directly through the Speech-to-Text V2 API or in the Google Cloud console.

Before you begin

Ensure you have signed up for a Google Cloud account, created a project, trained a custom speech model, and deployed it using an endpoint.
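
The examples that follow use the Speech-to-Text V2 Python client library. As a quick environment check, here is a minimal sketch (assuming the google-auth package is installed, which it is as a dependency of google-cloud-speech) that confirms Application Default Credentials are configured before you call the API:

import google.auth
from google.auth.exceptions import DefaultCredentialsError

try:
    # Raises DefaultCredentialsError if no credentials are configured
    credentials, project_id = google.auth.default()
    print(f"Authenticated; default project: {project_id}")
except DefaultCredentialsError:
    print("Run `gcloud auth application-default login` first.")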

Perform inference in V2

For a Custom Speech-to-Text model to be ready for use, the state of the model in the Models tab must be Active, and the dedicated endpoint in the Endpoints tab must be Deployed.

In our example, the Google Cloud project ID is custom-models-walkthrough, and the endpoint that corresponds to the Custom Speech-to-Text model quantum-computing-lectures-custom-model is quantum-computing-lectures-custom-model-prod-endpoint. The region where it's available is us-east1, and the transcription request is the following:

from google.api_core import client_options
from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech


def quickstart_v2(
    project_id: str,
    audio_file: str,
) -> cloud_speech.RecognizeResponse:
    """Transcribe an audio file."""
    # Instantiates a client that targets the regional API endpoint
    client = SpeechClient(
        client_options=client_options.ClientOptions(
            api_endpoint="us-east1-speech.googleapis.com"
        )
    )

    # Reads a file as bytes
    with open(audio_file, "rb") as f:
        content = f.read()

    # Points the request at the dedicated endpoint of the custom model
    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="projects/custom-models-walkthrough/locations/us-east1/endpoints/quantum-computing-lectures-custom-model-prod-endpoint",
    )
    request = cloud_speech.RecognizeRequest(
        recognizer=f"projects/{project_id}/locations/us-east1/recognizers/_",
        config=config,
        content=content,
    )

    # Transcribes the audio into text
    response = client.recognize(request=request)

    for result in response.results:
        print(f"Transcript: {result.alternatives[0].transcript}")

    return response
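
To run the sample, call the function with the project ID from the example and the path to a local audio file. The file name below is a placeholder, and the clip should be short, because synchronous recognition is meant for brief audio:

response = quickstart_v2(
    project_id="custom-models-walkthrough",
    audio_file="path/to/short-clip.wav",  # placeholder: any short local recording
)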

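For longer recordings, the V2 API also offers batch recognition against the same dedicated endpoint, reading audio from Cloud Storage. The following is a minimal sketch, not part of the walkthrough's own code: the gs:// URI is a placeholder, results are returned inline, and the call blocks on the long-running operation:

from google.api_core import client_options
from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech


def batch_transcribe_v2(
    project_id: str,
    gcs_uri: str,  # placeholder, e.g. "gs://your-bucket/lecture.wav"
) -> cloud_speech.BatchRecognizeResults:
    """Transcribe an audio file stored in Cloud Storage (sketch)."""
    client = SpeechClient(
        client_options=client_options.ClientOptions(
            api_endpoint="us-east1-speech.googleapis.com"
        )
    )
    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["en-US"],
        model="projects/custom-models-walkthrough/locations/us-east1/endpoints/quantum-computing-lectures-custom-model-prod-endpoint",
    )
    request = cloud_speech.BatchRecognizeRequest(
        recognizer=f"projects/{project_id}/locations/us-east1/recognizers/_",
        config=config,
        files=[cloud_speech.BatchRecognizeFileMetadata(uri=gcs_uri)],
        # Return transcripts inline instead of writing them to Cloud Storage
        recognition_output_config=cloud_speech.RecognitionOutputConfig(
            inline_response_config=cloud_speech.InlineOutputConfig(),
        ),
    )

    # batch_recognize returns a long-running operation; wait for it to finish
    operation = client.batch_recognize(request=request)
    response = operation.result(timeout=300)

    for result in response.results[gcs_uri].transcript.results:
        print(f"Transcript: {result.alternatives[0].transcript}")

    return response.results[gcs_uri].transcript
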
What's next

Follow these resources to take advantage of custom speech models in your application. See Evaluate your custom models.
