Index
- Adaptation (interface)
- Speech (interface)
- CreateCustomClassRequest (message)
- CreatePhraseSetRequest (message)
- CustomClass (message)
- CustomClass.ClassItem (message)
- DeleteCustomClassRequest (message)
- DeletePhraseSetRequest (message)
- GetCustomClassRequest (message)
- GetPhraseSetRequest (message)
- ListCustomClassesRequest (message)
- ListCustomClassesResponse (message)
- ListPhraseSetRequest (message)
- ListPhraseSetResponse (message)
- LongRunningRecognizeMetadata (message)
- LongRunningRecognizeRequest (message)
- LongRunningRecognizeResponse (message)
- PhraseSet (message)
- PhraseSet.Phrase (message)
- RecognitionAudio (message)
- RecognitionConfig (message)
- RecognitionConfig.AudioEncoding (enum)
- RecognitionMetadata (message) (deprecated)
- RecognitionMetadata.InteractionType (enum)
- RecognitionMetadata.MicrophoneDistance (enum)
- RecognitionMetadata.OriginalMediaType (enum)
- RecognitionMetadata.RecordingDeviceType (enum)
- RecognizeRequest (message)
- RecognizeResponse (message)
- SpeakerDiarizationConfig (message)
- SpeechAdaptation (message)
- SpeechAdaptation.ABNFGrammar (message)
- SpeechAdaptationInfo (message)
- SpeechContext (message)
- SpeechRecognitionAlternative (message)
- SpeechRecognitionResult (message)
- StreamingRecognitionConfig (message)
- StreamingRecognitionConfig.VoiceActivityTimeout (message)
- StreamingRecognitionResult (message)
- StreamingRecognizeRequest (message)
- StreamingRecognizeResponse (message)
- StreamingRecognizeResponse.SpeechEventType (enum)
- TranscriptOutputConfig (message)
- UpdateCustomClassRequest (message)
- UpdatePhraseSetRequest (message)
- WordInfo (message)
Adaptation
Service that implements Google Cloud Speech Adaptation API.
rpc CreateCustomClass(CreateCustomClassRequest) returns (CustomClass)

Create a custom class.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
rpc CreatePhraseSet(CreatePhraseSetRequest) returns (PhraseSet)

Create a set of phrase hints. Each item in the set can be a single word or a multi-word phrase. The items in the PhraseSet are favored by the recognition model when you send a call that includes the PhraseSet.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
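As an illustrative sketch (not part of the reference itself), creating a phrase set with the Python client library might look like the following; the project ID, phrase set ID, and phrase value are placeholders:

```python
from google.cloud import speech_v1p1beta1 as speech

adaptation_client = speech.AdaptationClient()

# Phrase sets live under a location; use "global" when calling the
# speech.googleapis.com endpoint.
parent = "projects/my-project/locations/global"  # hypothetical project ID

phrase_set = adaptation_client.create_phrase_set(
    parent=parent,
    phrase_set_id="my-phrase-set",  # 4-63 chars: letters, numbers, hyphens
    phrase_set=speech.PhraseSet(
        phrases=[speech.PhraseSet.Phrase(value="Keanu Reeves", boost=10.0)]
    ),
)
print(phrase_set.name)  # projects/.../locations/global/phraseSets/my-phrase-set
```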
rpc DeleteCustomClass(DeleteCustomClassRequest) returns (Empty)

Delete a custom class.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
rpc DeletePhraseSet(DeletePhraseSetRequest) returns (Empty)

Delete a phrase set.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
rpc GetCustomClass(GetCustomClassRequest) returns (CustomClass)

Get a custom class.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
rpc GetPhraseSet(GetPhraseSetRequest) returns (PhraseSet)

Get a phrase set.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
rpc ListCustomClasses(ListCustomClassesRequest) returns (ListCustomClassesResponse)

List custom classes.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
rpc ListPhraseSet(ListPhraseSetRequest) returns (ListPhraseSetResponse)

List phrase sets.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
rpc UpdateCustomClass(UpdateCustomClassRequest) returns (CustomClass)

Update a custom class.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
rpc UpdatePhraseSet(UpdatePhraseSetRequest) returns (PhraseSet)

Update a phrase set.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
Speech
Service that implements Google Cloud Speech API.
rpc LongRunningRecognize(LongRunningRecognizeRequest) returns (Operation)

Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an Operation.error or an Operation.response which contains a LongRunningRecognizeResponse message. For more information on asynchronous speech recognition, see the how-to.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
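A minimal asynchronous-recognition sketch with the Python client, assuming a 16 kHz LINEAR16 file in Cloud Storage (the URI is a placeholder):

```python
from google.cloud import speech_v1p1beta1 as speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/audio.wav")  # placeholder URI

# Returns a google.longrunning.Operation; block until the
# LongRunningRecognizeResponse is available.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=300)

for result in response.results:
    print(result.alternatives[0].transcript)
```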
rpc Recognize(RecognizeRequest) returns (RecognizeResponse)

Performs synchronous speech recognition: receive results after all audio has been sent and processed.

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
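For comparison, a synchronous call returns results directly once all audio has been processed; a hedged sketch with a local FLAC file (filename is a placeholder):

```python
from google.cloud import speech_v1p1beta1 as speech

client = speech.SpeechClient()

# Encoding and sample rate can be omitted for FLAC/WAV; they are read
# from the file header.
config = speech.RecognitionConfig(language_code="en-US")

with open("audio.flac", "rb") as f:  # placeholder local file
    audio = speech.RecognitionAudio(content=f.read())

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```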
rpc StreamingRecognize(StreamingRecognizeRequest) returns (StreamingRecognizeResponse)

Performs bidirectional streaming speech recognition: receive results while sending audio. This method is only available via the gRPC API (not REST).

Authorization Scopes

Requires the following OAuth scope:

- https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.
CreateCustomClassRequest
Message sent by the client for the CreateCustomClass method.

parent
string
Required. The parent resource where this custom class will be created. Format:
projects/{project}/locations/{location}/customClasses

Speech-to-Text supports three locations: `global`, `us` (US North America), and `eu` (Europe). If you are calling the `speech.googleapis.com` endpoint, use the `global` location. To specify a region, use a regional endpoint with a matching `us` or `eu` location value.

Authorization requires the following IAM permission on the specified resource `parent`:
- `speech.customClasses.create`

custom_class_id
string
Required. The ID to use for the custom class, which will become the final component of the custom class's resource name. This value should be 4-63 characters long and contain only letters, numbers, and hyphens; the first character must be a letter and the last a letter or a number.

custom_class
CustomClass
Required. The custom class to create.
CreatePhraseSetRequest
Message sent by the client for the CreatePhraseSet method.

parent
string
Required. The parent resource where this phrase set will be created. Format:
projects/{project}/locations/{location}/phraseSets

Speech-to-Text supports three locations: `global`, `us` (US North America), and `eu` (Europe). If you are calling the `speech.googleapis.com` endpoint, use the `global` location. To specify a region, use a regional endpoint with a matching `us` or `eu` location value.

Authorization requires the following IAM permission on the specified resource `parent`:
- `speech.phraseSets.create`

phrase_set_id
string
Required. The ID to use for the phrase set, which will become the final component of the phrase set's resource name. This value should be 4-63 characters long and contain only letters, numbers, and hyphens; the first character must be a letter and the last a letter or a number.

phrase_set
PhraseSet
Required. The phrase set to create.
CustomClass
A set of words or phrases that represents a common concept likely to appear in your audio, for example a list of passenger ship names. CustomClass items can be substituted into placeholders that you set in PhraseSet phrases.
Fields | |
---|---|
`name` | The resource name of the custom class. |
`custom_class_id` | If this custom class is a resource, the custom_class_id is the resource id of the CustomClass. Case sensitive. |
`items[]` | A collection of class items. |
ClassItem
An item of the class.
Fields | |
---|---|
`value` | The class item's value. |
DeleteCustomClassRequest
Message sent by the client for the DeleteCustomClass method.

name
string
Required. The name of the custom class to delete. Format:
projects/{project}/locations/{location}/customClasses/{custom_class}

Speech-to-Text supports three locations: `global`, `us` (US North America), and `eu` (Europe). If you are calling the `speech.googleapis.com` endpoint, use the `global` location. To specify a region, use a regional endpoint with a matching `us` or `eu` location value.

Authorization requires the following IAM permission on the specified resource `name`:
- `speech.customClasses.delete`
DeletePhraseSetRequest
Message sent by the client for the DeletePhraseSet method.

name
string
Required. The name of the phrase set to delete. Format:
projects/{project}/locations/{location}/phraseSets/{phrase_set}

Authorization requires the following IAM permission on the specified resource `name`:
- `speech.phraseSets.delete`
GetCustomClassRequest
Message sent by the client for the GetCustomClass method.

name
string
Required. The name of the custom class to retrieve. Format:
projects/{project}/locations/{location}/customClasses/{custom_class}

Authorization requires the following IAM permission on the specified resource `name`:
- `speech.customClasses.get`
GetPhraseSetRequest
Message sent by the client for the GetPhraseSet method.

name
string
Required. The name of the phrase set to retrieve. Format:
projects/{project}/locations/{location}/phraseSets/{phrase_set}

Speech-to-Text supports three locations: `global`, `us` (US North America), and `eu` (Europe). If you are calling the `speech.googleapis.com` endpoint, use the `global` location. To specify a region, use a regional endpoint with a matching `us` or `eu` location value.

Authorization requires the following IAM permission on the specified resource `name`:
- `speech.phraseSets.get`
ListCustomClassesRequest
Message sent by the client for the ListCustomClasses method.

parent
string
Required. The parent, which owns this collection of custom classes. Format:
projects/{project}/locations/{location}/customClasses

Speech-to-Text supports three locations: `global`, `us` (US North America), and `eu` (Europe). If you are calling the `speech.googleapis.com` endpoint, use the `global` location. To specify a region, use a regional endpoint with a matching `us` or `eu` location value.

Authorization requires the following IAM permission on the specified resource `parent`:
- `speech.customClasses.list`

page_size
int32
The maximum number of custom classes to return. The service may return fewer than this value. If unspecified, at most 50 custom classes will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.

page_token
string
A page token, received from a previous ListCustomClasses call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to ListCustomClasses must match the call that provided the page token.
ListCustomClassesResponse
Message returned to the client by the ListCustomClasses method.

Fields | |
---|---|
`custom_classes[]` | The custom classes. |
`next_page_token` | A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages. |
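As a sketch of how the paging fields fit together, the Python client wraps page_token handling in a pager; the parent value below is a placeholder:

```python
from google.cloud import speech_v1p1beta1 as speech

adaptation_client = speech.AdaptationClient()
parent = "projects/my-project/locations/global"  # hypothetical project ID

# The pager issues ListCustomClasses calls under the hood, forwarding each
# response's next_page_token as the next request's page_token until the
# token is omitted.
pager = adaptation_client.list_custom_classes(
    request={"parent": parent, "page_size": 50}
)
for custom_class in pager:
    print(custom_class.name)
```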
ListPhraseSetRequest
Message sent by the client for the ListPhraseSet method.

parent
string
Required. The parent, which owns this collection of phrase sets. Format:
projects/{project}/locations/{location}

Speech-to-Text supports three locations: `global`, `us` (US North America), and `eu` (Europe). If you are calling the `speech.googleapis.com` endpoint, use the `global` location. To specify a region, use a regional endpoint with a matching `us` or `eu` location value.

Authorization requires the following IAM permission on the specified resource `parent`:
- `speech.phraseSets.list`

page_size
int32
The maximum number of phrase sets to return. The service may return fewer than this value. If unspecified, at most 50 phrase sets will be returned. The maximum value is 1000; values above 1000 will be coerced to 1000.

page_token
string
A page token, received from a previous ListPhraseSet call. Provide this to retrieve the subsequent page.

When paginating, all other parameters provided to ListPhraseSet must match the call that provided the page token.
ListPhraseSetResponse
Message returned to the client by the ListPhraseSet method.

Fields | |
---|---|
`phrase_sets[]` | The phrase sets. |
`next_page_token` | A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages. |
LongRunningRecognizeMetadata
Describes the progress of a long-running LongRunningRecognize call. It is included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

Fields | |
---|---|
`progress_percent` | Approximate percentage of audio processed thus far. Guaranteed to be 100 when the audio is fully processed and the results are available. |
`start_time` | Time when the request was received. |
`last_update_time` | Time of the most recent processing update. |
`uri` | Output only. The URI of the audio file being transcribed. Empty if the audio was sent as byte content. |
LongRunningRecognizeRequest
The top-level message sent by the client for the LongRunningRecognize method.

Fields | |
---|---|
`config` | Required. Provides information to the recognizer that specifies how to process the request. |
`audio` | Required. The audio data to be recognized. |
`output_config` | Optional. Specifies an optional destination for the recognition results. |
LongRunningRecognizeResponse
The only message returned to the client by the LongRunningRecognize method. It contains the result as zero or more sequential SpeechRecognitionResult messages. It is included in the result.response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

Fields | |
---|---|
`results[]` | Sequential list of transcription results corresponding to sequential portions of audio. |
`total_billed_time` | When available, billed audio seconds for the corresponding request. |
`output_config` | Original output config if present in the request. |
`output_error` | If the transcript output fails, this field contains the relevant error. |
`speech_adaptation_info` | Provides information on speech adaptation behavior in the response. |
`request_id` | The ID associated with the request. This is a unique ID specific only to the given request. |
PhraseSet
Provides "hints" to the speech recognizer to favor specific words and phrases in the results.
Fields | |
---|---|
`name` | The resource name of the phrase set. |
`phrases[]` | A list of words and phrases. |
`boost` | Hint Boost. Positive value will increase the probability that a specific phrase will be recognized over other similar sounding phrases. The higher the boost, the higher the chance of false positive recognition as well. Negative boost values would correspond to anti-biasing. Anti-biasing is not enabled, so negative boost will simply be ignored. Though `boost` can accept a wide range of positive values, most use cases are best served with values between 0 and 20. We recommend using a binary search approach to finding the optimal value for your use case. |
Phrase
A set of words and phrase "hints" so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See usage limits.

List items can also include pre-built or custom classes containing groups of words that represent common concepts that occur in natural language. For example, rather than providing a phrase hint for every month of the year (e.g. "i was born in january", "i was born in february", ...), using the pre-built $MONTH class improves the likelihood of correctly transcribing audio that includes months (e.g. "i was born in $month"). To refer to pre-built classes, use the class' symbol prepended with $, e.g. $MONTH. To refer to custom classes that were defined inline in the request, set the class's custom_class_id to a string unique to all class resources and inline classes, then use the class' id wrapped in ${...}, e.g. "${my-months}". To refer to custom class resources, use the class' id wrapped in ${} (e.g. ${my-months}).

Speech-to-Text supports three locations: `global`, `us` (US North America), and `eu` (Europe). If you are calling the `speech.googleapis.com` endpoint, use the `global` location. To specify a region, use a regional endpoint with a matching `us` or `eu` location value.

Fields | |
---|---|
`value` | The phrase itself. |
`boost` | Hint Boost. Overrides the boost set at the phrase set level. Positive value will increase the probability that a specific phrase will be recognized over other similar sounding phrases. The higher the boost, the higher the chance of false positive recognition as well. Negative boost will simply be ignored. Though `boost` can accept a wide range of positive values, most use cases are best served with values between 0 and 20. We recommend using a binary search approach to finding the optimal value for your use case. |
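To make the class-reference syntax concrete, here is a hedged sketch of a request-inline phrase set that references both the pre-built $MONTH class and an inline custom class; all IDs and phrase values are illustrative:

```python
from google.cloud import speech_v1p1beta1 as speech

# Inline custom class, referenced from a phrase below as ${payment-method}.
payment_class = speech.CustomClass(
    custom_class_id="payment-method",  # illustrative, request-unique ID
    items=[{"value": "credit card"}, {"value": "cash"}],
)

adaptation = speech.SpeechAdaptation(
    custom_classes=[payment_class],
    phrase_sets=[
        speech.PhraseSet(
            phrases=[
                # Pre-built class, referenced by its $-prefixed symbol.
                speech.PhraseSet.Phrase(value="i was born in $MONTH", boost=10.0),
                # Inline custom class, referenced as ${custom_class_id}.
                speech.PhraseSet.Phrase(value="pay the fare with ${payment-method}"),
            ]
        )
    ],
)

config = speech.RecognitionConfig(language_code="en-US", adaptation=adaptation)
```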
RecognitionAudio
Contains audio data in the encoding specified in the RecognitionConfig. Either content or uri must be supplied. Supplying both or neither returns google.rpc.Code.INVALID_ARGUMENT. See content limits.

Union field audio_source. The audio source, which is either inline content or a Google Cloud Storage URI. audio_source can be only one of the following:

content
bytes
The audio data bytes encoded as specified in RecognitionConfig. Note: as with all bytes fields, proto buffers use a pure binary representation, whereas JSON representations use base64.

uri
string
URI that points to a file that contains audio data bytes as specified in RecognitionConfig. The file must not be compressed (for example, gzip). Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket_name/object_name (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.
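A short sketch of the two mutually exclusive audio sources; setting both fields in one message would be rejected with INVALID_ARGUMENT (filenames and bucket are placeholders):

```python
from google.cloud import speech_v1p1beta1 as speech

# Option 1: inline bytes. Over JSON/REST these would be base64-encoded;
# the client library handles the binary proto representation.
with open("audio.raw", "rb") as f:  # placeholder local file
    inline_audio = speech.RecognitionAudio(content=f.read())

# Option 2: a Cloud Storage URI; the referenced file must not be compressed.
gcs_audio = speech.RecognitionAudio(uri="gs://my-bucket/audio.raw")  # placeholder
```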
RecognitionConfig
Provides information to the recognizer that specifies how to process the request.
encoding
Encoding of audio data sent in all RecognitionAudio messages. This field is optional for FLAC and WAV audio files and required for all other audio formats. For details, see AudioEncoding.

sample_rate_hertz
int32
Sample rate in Hertz of the audio data sent in all RecognitionAudio messages. Valid values are: 8000-48000. 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for FLAC and WAV audio files, but is required for all other audio formats. For details, see AudioEncoding.

audio_channel_count
int32
The number of channels in the input audio data. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16, OGG_OPUS and FLAC are 1-8. The only valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is 1. If 0 or omitted, defaults to one channel (mono). Note: We only recognize the first channel by default. To perform independent recognition on each channel, set enable_separate_recognition_per_channel to 'true'.

enable_separate_recognition_per_channel
bool
This needs to be set to true explicitly, with audio_channel_count > 1, to get each channel recognized separately. The recognition result will contain a channel_tag field to state which channel that result belongs to. If this is not true, we will only recognize the first channel. The request is billed cumulatively for all channels recognized: audio_channel_count multiplied by the length of the audio.
language_code
string
Required. The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes.
alternative_language_codes[]
string
A list of up to 3 additional BCP-47 language tags, listing possible alternative languages of the supplied audio. See Language Support for a list of the currently supported language codes. If alternative languages are listed, recognition result will contain recognition in the most likely language detected including the main language_code. The recognition result will include the language tag of the language detected in the audio. Note: This feature is only supported for Voice Command and Voice Search use cases and performance may vary for other use cases (e.g., phone call transcription).
max_alternatives
int32
Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechRecognitionResult. The server may return fewer than max_alternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one. If omitted, will return a maximum of one.

profanity_filter
bool
If set to true, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to false or omitted, profanities won't be filtered out.

adaptation
Speech adaptation configuration improves the accuracy of speech recognition. For more information, see the speech adaptation documentation. When speech adaptation is set, it supersedes the speech_contexts field.

speech_contexts[]
Array of SpeechContext. A means to provide context to assist the speech recognition. For more information, see speech adaptation.
enable_word_time_offsets
bool
If true, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If false, no word-level time offset information is returned. The default is false.

enable_word_confidence
bool
If true, the top result includes a list of words and the confidence for those words. If false, no word-level confidence information is returned. The default is false.
enable_automatic_punctuation
bool
If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages. Setting this for requests in other languages has no effect at all. The default 'false' value does not add punctuation to result hypotheses.
enable_spoken_punctuation
The spoken punctuation behavior for the call. If not set, uses default behavior based on the model of choice; e.g. command_and_search will enable spoken punctuation by default. If 'true', replaces spoken punctuation with the corresponding symbols in the request. For example, "how are you question mark" becomes "how are you?". See https://cloud.google.com/speech-to-text/docs/spoken-punctuation for support. If 'false', spoken punctuation is not replaced.

enable_spoken_emojis
The spoken emoji behavior for the call. If not set, uses default behavior based on the model of choice. If 'true', adds spoken emoji formatting for the request. This will replace spoken emojis with the corresponding Unicode symbols in the final transcript. If 'false', spoken emojis are not replaced.
diarization_config
Config to enable speaker diarization and set additional parameters to make diarization better suited for your application. Note: When this is enabled, we send all the words from the beginning of the audio for the top alternative in every consecutive STREAMING responses. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time. For non-streaming requests, the diarization results will be provided only in the top alternative of the FINAL SpeechRecognitionResult.
model
string
Which model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the RecognitionConfig.
Model | Description |
---|---|
`latest_long` | Best for long form content like media or conversation. |
`latest_short` | Best for short form content like commands or single shot directed speech. |
`command_and_search` | Best for short queries such as voice commands or voice search. |
`phone_call` | Best for audio that originated from a phone call (typically recorded at an 8kHz sampling rate). |
`video` | Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16kHz or greater sampling rate. This is a premium model that costs more than the standard rate. |
`default` | Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16kHz or greater sampling rate. |
`medical_conversation` | Best for audio that originated from a conversation between a medical provider and patient. |
`medical_dictation` | Best for audio that originated from dictation notes by a medical provider. |
use_enhanced
bool
Set to true to use an enhanced model for speech recognition. If use_enhanced is set to true and the model field is not set, then an appropriate enhanced model is chosen if an enhanced model exists for the audio.

If use_enhanced is true and an enhanced version of the specified model does not exist, then the speech is recognized using the standard version of the specified model.
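Pulling several of the fields above together, a hedged example configuration for an 8 kHz mu-law phone-call recording (values are illustrative, not recommendations):

```python
from google.cloud import speech_v1p1beta1 as speech

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.MULAW,
    sample_rate_hertz=8000,           # native rate of the source; do not resample
    language_code="en-US",            # required on every request
    max_alternatives=2,
    profanity_filter=True,
    enable_automatic_punctuation=True,
    enable_word_time_offsets=True,
    model="phone_call",
    use_enhanced=True,                # falls back to the standard model if no
                                      # enhanced variant exists
)
```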
AudioEncoding
The encoding of the audio data sent in the request.

All encodings support only 1 channel (mono) audio, unless the audio_channel_count and enable_separate_recognition_per_channel fields are set.

For best results, the audio source should be captured and transmitted using a lossless encoding (FLAC or LINEAR16). The accuracy of the speech recognition can be reduced if lossy codecs are used to capture or transmit audio, particularly if background noise is present. Lossy codecs include MULAW, AMR, AMR_WB, OGG_OPUS, SPEEX_WITH_HEADER_BYTE, MP3, and WEBM_OPUS.

The FLAC and WAV audio file formats include a header that describes the included audio content. You can request recognition for WAV files that contain either LINEAR16 or MULAW encoded audio. If you send FLAC or WAV audio file format in your request, you do not need to specify an AudioEncoding; the audio encoding format is determined from the file header. If you specify an AudioEncoding when you send FLAC or WAV audio, the encoding configuration must match the encoding described in the audio header; otherwise the request returns a google.rpc.Code.INVALID_ARGUMENT error code.
Enums | |
---|---|
`ENCODING_UNSPECIFIED` | Not specified. |
`LINEAR16` | Uncompressed 16-bit signed little-endian samples (Linear PCM). |
`FLAC` | FLAC (Free Lossless Audio Codec) is the recommended encoding because it is lossless--therefore recognition is not compromised--and requires only about half the bandwidth of LINEAR16. FLAC stream encoding supports 16-bit and 24-bit samples; however, not all fields in STREAMINFO are supported. |
`MULAW` | 8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law. |
`AMR` | Adaptive Multi-Rate Narrowband codec. sample_rate_hertz must be 8000. |
`AMR_WB` | Adaptive Multi-Rate Wideband codec. sample_rate_hertz must be 16000. |
`OGG_OPUS` | Opus encoded audio frames in an Ogg container (OggOpus). sample_rate_hertz must be one of 8000, 12000, 16000, 24000, or 48000. |
`SPEEX_WITH_HEADER_BYTE` | Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, OGG_OPUS is highly preferred over Speex encoding. The Speex encoding supported by Cloud Speech API has a header byte in each block, as in MIME type audio/x-speex-with-header-byte. It is a variant of the RTP Speex encoding defined in RFC 5574. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. sample_rate_hertz must be 16000. |
`WEBM_OPUS` | Opus encoded audio frames in a WebM container. sample_rate_hertz must be one of 8000, 12000, 16000, 24000, or 48000. |
RecognitionMetadata
Description of audio data to be recognized. This message is deprecated.

Fields | |
---|---|
`interaction_type` | The use case most closely describing the audio content to be recognized. |
`industry_naics_code_of_audio` | The industry vertical to which this speech recognition request most closely applies. This is most indicative of the topics contained in the audio. Use the 6-digit NAICS code to identify the industry vertical - see https://www.naics.com/search/. |
`microphone_distance` | The audio type that most closely describes the audio being recognized. |
`original_media_type` | The original media the speech was recorded on. |
`recording_device_type` | The type of device the speech was recorded with. |
`recording_device_name` | The device used to make the recording. Examples: 'Nexus 5X' or 'Polycom SoundStation IP 6000' or 'POTS' or 'VoIP' or 'Cardioid Microphone'. |
`original_mime_type` | Mime type of the original audio file. For example audio/m4a, audio/x-alaw-basic, audio/mp3, audio/3gpp. A list of possible audio mime types is maintained at http://www.iana.org/assignments/media-types/media-types.xhtml#audio. |
`audio_topic` | Description of the content. E.g. "Recordings of federal supreme court hearings from 2012". |
InteractionType
Use case categories that the audio recognition request can be described by.
Enums | |
---|---|
`INTERACTION_TYPE_UNSPECIFIED` | Use case is either unknown or is something other than one of the other values below. |
`DISCUSSION` | Multiple people in a conversation or discussion. For example in a meeting with two or more people actively participating. Typically all the primary people speaking would be in the same room (if not, see PHONE_CALL). |
`PRESENTATION` | One or more persons lecturing or presenting to others, mostly uninterrupted. |
`PHONE_CALL` | A phone call or video conference in which two or more people, who are not in the same room, are actively participating. |
`VOICEMAIL` | A recorded message intended for another person to listen to. |
`PROFESSIONALLY_PRODUCED` | Professionally produced audio (e.g. TV show, podcast). |
`VOICE_SEARCH` | Transcribe spoken questions and queries into text. |
`VOICE_COMMAND` | Transcribe voice commands, such as for controlling a device. |
`DICTATION` | Transcribe speech to text to create a written document, such as a text message, email or report. |
MicrophoneDistance
Enumerates the types of capture settings describing an audio file.
Enums | |
---|---|
`MICROPHONE_DISTANCE_UNSPECIFIED` | Audio type is not known. |
`NEARFIELD` | The audio was captured from a closely placed microphone, e.g. phone, dictaphone, or handheld microphone. Generally, the speaker is within 1 meter of the microphone. |
`MIDFIELD` | The speaker is within 3 meters of the microphone. |
`FARFIELD` | The speaker is more than 3 meters away from the microphone. |
OriginalMediaType
The original media the speech was recorded on.
Enums | |
---|---|
`ORIGINAL_MEDIA_TYPE_UNSPECIFIED` | Unknown original media type. |
`AUDIO` | The speech data is an audio recording. |
`VIDEO` | The speech data was originally recorded on a video. |
RecordingDeviceType
The type of device the speech was recorded with.
Enums | |
---|---|
`RECORDING_DEVICE_TYPE_UNSPECIFIED` | The recording device is unknown. |
`SMARTPHONE` | Speech was recorded on a smartphone. |
`PC` | Speech was recorded using a personal computer or tablet. |
`PHONE_LINE` | Speech was recorded over a phone line. |
`VEHICLE` | Speech was recorded in a vehicle. |
`OTHER_OUTDOOR_DEVICE` | Speech was recorded outdoors. |
`OTHER_INDOOR_DEVICE` | Speech was recorded indoors. |
RecognizeRequest
The top-level message sent by the client for the Recognize method.

Fields | |
---|---|
`config` | Required. Provides information to the recognizer that specifies how to process the request. |
`audio` | Required. The audio data to be recognized. |
RecognizeResponse
The only message returned to the client by the Recognize method. It contains the result as zero or more sequential SpeechRecognitionResult messages.

Fields | |
---|---|
`results[]` | Sequential list of transcription results corresponding to sequential portions of audio. |
`total_billed_time` | When available, billed audio seconds for the corresponding request. |
`speech_adaptation_info` | Provides information on adaptation behavior in the response. |
`request_id` | The ID associated with the request. This is a unique ID specific only to the given request. |
SpeakerDiarizationConfig
Config to enable speaker diarization.
Fields | |
---|---|
`enable_speaker_diarization` | If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result using a speaker_tag provided in the WordInfo. |
`min_speaker_count` | Minimum number of speakers in the conversation. This range gives you more flexibility by allowing the system to automatically determine the correct number of speakers. If not set, the default value is 2. |
`max_speaker_count` | Maximum number of speakers in the conversation. This range gives you more flexibility by allowing the system to automatically determine the correct number of speakers. If not set, the default value is 6. |
`speaker_tag` | Output only. Unused. |
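A sketch of enabling diarization and reading speaker tags from the final result's top alternative, per the field descriptions above (the file URI and speaker counts are placeholders):

```python
from google.cloud import speech_v1p1beta1 as speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    language_code="en-US",
    diarization_config=speech.SpeakerDiarizationConfig(
        enable_speaker_diarization=True,
        min_speaker_count=2,
        max_speaker_count=4,
    ),
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/meeting.flac")  # placeholder

response = client.recognize(config=config, audio=audio)

# For non-streaming requests, diarization is carried by the words of the
# top alternative of the final result.
for word in response.results[-1].alternatives[0].words:
    print(f"speaker {word.speaker_tag}: {word.word}")
```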
SpeechAdaptation
Speech adaptation configuration.
Fields | |
---|---|
`phrase_sets[]` | A collection of phrase sets. To specify the hints inline, leave the phrase set's `name` blank and fill in the rest of its fields. Any phrase set can use any custom class. |
`phrase_set_references[]` | A collection of phrase set resource names to use. |
`custom_classes[]` | A collection of custom classes. To specify the classes inline, leave the class' `name` blank and fill in the rest of its fields, giving it a unique `custom_class_id`. Refer to the inline defined class in phrase hints by its `custom_class_id`. |
`abnf_grammar` | Augmented Backus-Naur form (ABNF) is a standardized grammar notation comprising a set of derivation rules. See specifications: https://www.w3.org/TR/speech-grammar |
ABNFGrammar
Fields | |
---|---|
`abnf_strings[]` | All declarations and rules of an ABNF grammar broken up into multiple strings that will end up concatenated. |
SpeechAdaptationInfo
Information on speech adaptation use in results.

Fields | |
---|---|
`adaptation_timeout` | Whether there was a timeout when applying speech adaptation. If true, adaptation had no effect in the response transcript. |
SpeechContext
Provides "hints" to the speech recognizer to favor specific words and phrases in the results.
Fields | |
---|---|
`phrases[]` | A list of strings containing word and phrase "hints" so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See usage limits. List items can also be set to classes for groups of words that represent common concepts that occur in natural language. For example, rather than providing phrase hints for every month of the year, using the $MONTH class improves the likelihood of correctly transcribing audio that includes months. |
`boost` | Hint Boost. Positive value will increase the probability that a specific phrase will be recognized over other similar sounding phrases. The higher the boost, the higher the chance of false positive recognition as well. Negative boost values would correspond to anti-biasing. Anti-biasing is not enabled, so negative boost will simply be ignored. Though `boost` can accept a wide range of positive values, most use cases are best served with values between 0 and 20. We recommend using a binary search approach to finding the optimal value for your use case. |
SpeechRecognitionAlternative
Alternative hypotheses (a.k.a. n-best list).
Fields | |
---|---|
`transcript` | Transcript text representing the words that the user spoke. In languages that use spaces to separate words, the transcript might have a leading space if it isn't the first result. You can concatenate each result to obtain the full transcript without using a separator. |
`confidence` | The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is set only for the top alternative of a non-streaming result or, of a streaming result where `is_final=true`. This field is not guaranteed to be accurate and users should not rely on it to be always provided. The default of 0.0 is a sentinel value indicating `confidence` was not set. |
`words[]` | A list of word-specific information for each recognized word. Note: When `enable_speaker_diarization` is true, you will see all the words from the beginning of the audio. |
SpeechRecognitionResult
A speech recognition result corresponding to a portion of the audio.
Fields | |
---|---|
`alternatives[]` | May contain one or more recognition hypotheses (up to the maximum specified in `max_alternatives`). These alternatives are ordered in terms of accuracy, with the top (first) alternative being the most probable, as indicated by the `confidence` field. |
`channel_tag` | For multi-channel audio, this is the channel number corresponding to the recognized result for the audio from that channel. For audio_channel_count = N, its output values can range from '1' to 'N'. |
`result_end_time` | Time offset of the end of this result relative to the beginning of the audio. |
`language_code` | Output only. The BCP-47 language tag of the language in this result. This language code was detected to have the most likelihood of being spoken in the audio. |
StreamingRecognitionConfig
Provides information to the recognizer that specifies how to process the request.
config
Required. Provides information to the recognizer that specifies how to process the request.

single_utterance
bool
If false or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. May return multiple StreamingRecognitionResults with the is_final flag set to true.

If true, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one StreamingRecognitionResult with the is_final flag set to true.

The single_utterance field can only be used with specified models, otherwise an error is thrown. The model field in RecognitionConfig must be set to:

- command_and_search
- phone_call AND additional field useEnhanced=true
- The model field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in RecognitionConfig.

interim_results
bool
If true, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the is_final=false flag). If false or omitted, only is_final=true result(s) are returned.

enable_voice_activity_events
bool
If true, responses with voice activity speech events will be returned as they are detected.

voice_activity_timeout
If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field voice_activity_events must also be set to true.
VoiceActivityTimeout
Events that a timeout can be set on for voice activity.
Fields | |
---|---|
`speech_start_timeout` | Duration to timeout the stream if no speech begins. |
`speech_end_timeout` | Duration to timeout the stream after speech ends. |
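As a sketch, a streaming configuration combining interim results with voice-activity events and timeouts; the duration values are illustrative:

```python
from google.cloud import speech_v1p1beta1 as speech
from google.protobuf import duration_pb2

streaming_config = speech.StreamingRecognitionConfig(
    config=speech.RecognitionConfig(language_code="en-US"),
    interim_results=True,
    enable_voice_activity_events=True,
    voice_activity_timeout=speech.StreamingRecognitionConfig.VoiceActivityTimeout(
        # Close the stream if no speech starts within 10 s...
        speech_start_timeout=duration_pb2.Duration(seconds=10),
        # ...or once speech has been absent for 3 s.
        speech_end_timeout=duration_pb2.Duration(seconds=3),
    ),
)
```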
StreamingRecognitionResult
A streaming speech recognition result corresponding to a portion of the audio that is currently being processed.
Fields | |
---|---|
`alternatives[]` | May contain one or more recognition hypotheses (up to the maximum specified in `max_alternatives`). These alternatives are ordered in terms of accuracy, with the top (first) alternative being the most probable, as indicated by the `confidence` field. |
`is_final` | If `false`, this StreamingRecognitionResult represents an interim result that may change. If `true`, this is the final time the speech service will return this particular StreamingRecognitionResult, and the recognizer will not return any further hypotheses for this portion of the transcript and corresponding audio. |
`stability` | An estimate of the likelihood that the recognizer will not change its guess about this interim result. Values range from 0.0 (completely unstable) to 1.0 (completely stable). This field is only provided for interim results (`is_final=false`). The default of 0.0 is a sentinel value indicating `stability` was not set. |
`result_end_time` | Time offset of the end of this result relative to the beginning of the audio. |
`channel_tag` | For multi-channel audio, this is the channel number corresponding to the recognized result for the audio from that channel. For audio_channel_count = N, its output values can range from '1' to 'N'. |
`language_code` | Output only. The BCP-47 language tag of the language in this result. This language code was detected to have the most likelihood of being spoken in the audio. |
StreamingRecognizeRequest
The top-level message sent by the client for the StreamingRecognize method. Multiple StreamingRecognizeRequest messages are sent. The first message must contain a streaming_config message and must not contain audio_content. All subsequent messages must contain audio_content and must not contain a streaming_config message.

Union field streaming_request. The streaming request, which is either a streaming config or audio content. streaming_request can be only one of the following:

streaming_config
Provides information to the recognizer that specifies how to process the request. The first StreamingRecognizeRequest message must contain a streaming_config message.

audio_content
bytes
The audio data to be recognized. Sequential chunks of audio data are sent in sequential StreamingRecognizeRequest messages. The first StreamingRecognizeRequest message must not contain audio_content data and all subsequent StreamingRecognizeRequest messages must contain audio_content data. The audio bytes must be encoded as specified in RecognitionConfig. Note: as with all bytes fields, proto buffers use a pure binary representation (not base64). See content limits.
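The config-first ordering can be expressed as a request generator; a minimal sketch (the chunk source is a placeholder):

```python
from google.cloud import speech_v1p1beta1 as speech

def streaming_requests(streaming_config, audio_chunks):
    # First message: streaming_config only, no audio_content.
    yield speech.StreamingRecognizeRequest(streaming_config=streaming_config)
    # Every subsequent message: audio_content only.
    for chunk in audio_chunks:
        yield speech.StreamingRecognizeRequest(audio_content=chunk)
```

Note that depending on the client-library version, you either pass such an iterator directly to the streaming method or pass the config and an audio-only iterator separately and let a helper prepend the config request for you.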
StreamingRecognizeResponse
StreamingRecognizeResponse is the only message returned to the client by StreamingRecognize. A series of zero or more StreamingRecognizeResponse messages are streamed back to the client. If there is no recognizable audio, and single_utterance is set to false, then no messages are streamed back to the client.

Here's an example of a series of StreamingRecognizeResponses that might be returned while processing audio:

1. results { alternatives { transcript: "tube" } stability: 0.01 }
2. results { alternatives { transcript: "to be a" } stability: 0.01 }
3. results { alternatives { transcript: "to be" } stability: 0.9 } results { alternatives { transcript: " or not to be" } stability: 0.01 }
4. results { alternatives { transcript: "to be or not to be" confidence: 0.92 } alternatives { transcript: "to bee or not to bee" } is_final: true }
5. results { alternatives { transcript: " that's" } stability: 0.01 }
6. results { alternatives { transcript: " that is" } stability: 0.9 } results { alternatives { transcript: " the question" } stability: 0.01 }
7. results { alternatives { transcript: " that is the question" confidence: 0.98 } alternatives { transcript: " that was the question" } is_final: true }

Notes:

- Only two of the above responses (#4 and #7) contain final results; they are indicated by is_final: true. Concatenating these together generates the full transcript: "to be or not to be that is the question".
- The others contain interim results. #3 and #6 contain two interim results: the first portion has a high stability and is less likely to change; the second portion has a low stability and is very likely to change. A UI designer might choose to show only high stability results.
- The specific stability and confidence values shown above are only for illustrative purposes. Actual values may vary.
- In each response, only one of these fields will be set: error, speech_event_type, or one or more (repeated) results.
Fields | |
---|---|
`error` | If set, returns a google.rpc.Status message that specifies the error for the operation. |
`results[]` | This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one is_final=true result (the newly settled portion), followed by zero or more is_final=false results (the interim results). |
`speech_event_type` | Indicates the type of speech event. |
`total_billed_time` | When available, billed audio seconds for the stream. Set only if this is the last response in the stream. |
`speech_adaptation_info` | Provides information on adaptation behavior in the response. |
`request_id` | The ID associated with the request. This is a unique ID specific only to the given request. |
SpeechEventType
Indicates the type of speech event.
Enums | |
---|---|
`SPEECH_EVENT_UNSPECIFIED` | No speech event specified. |
`END_OF_SINGLE_UTTERANCE` | This event indicates that the server has detected the end of the user's speech utterance and expects no additional speech. Therefore, the server will not process additional audio (although it may subsequently return additional results). The client should stop sending additional audio data, half-close the gRPC connection, and wait for any additional results until the server closes the gRPC connection. This event is only sent if single_utterance was set to true, and is not used otherwise. |
`SPEECH_ACTIVITY_BEGIN` | This event indicates that the server has detected the beginning of human voice activity in the stream. This event can be returned multiple times if speech starts and stops repeatedly throughout the stream. This event is only sent if voice_activity_events is set to true. |
`SPEECH_ACTIVITY_END` | This event indicates that the server has detected the end of human voice activity in the stream. This event can be returned multiple times if speech starts and stops repeatedly throughout the stream. This event is only sent if voice_activity_events is set to true. |
`SPEECH_ACTIVITY_TIMEOUT` | This event indicates that the user-set timeout for speech activity begin or end has been exceeded. Upon receiving this event, the client is expected to send a half close. Further audio will not be processed. |
TranscriptOutputConfig
Specifies an optional destination for the recognition results.
Union field output_type. output_type can be only one of the following:

gcs_uri
string
Specifies a Cloud Storage URI for the recognition results. Must be specified in the format gs://bucket_name/object_name, and the bucket must already exist.
UpdateCustomClassRequest
Message sent by the client for the UpdateCustomClass method.

custom_class
CustomClass
Required. The custom class to update.

The custom class's name field is used to identify the custom class to be updated. Format:
projects/{project}/locations/{location}/customClasses/{custom_class}

Speech-to-Text supports three locations: `global`, `us` (US North America), and `eu` (Europe). If you are calling the `speech.googleapis.com` endpoint, use the `global` location. To specify a region, use a regional endpoint with a matching `us` or `eu` location value.

Authorization requires the following IAM permission on the specified resource `customClass`:
- `speech.customClasses.update`

update_mask
The list of fields to be updated.
UpdatePhraseSetRequest
Message sent by the client for the UpdatePhraseSet method.

phrase_set
PhraseSet
Required. The phrase set to update.

The phrase set's name field is used to identify the set to be updated. Format:
projects/{project}/locations/{location}/phraseSets/{phrase_set}

Speech-to-Text supports three locations: `global`, `us` (US North America), and `eu` (Europe). If you are calling the `speech.googleapis.com` endpoint, use the `global` location. To specify a region, use a regional endpoint with a matching `us` or `eu` location value.

Authorization requires the following IAM permission on the specified resource `phraseSet`:
- `speech.phraseSets.update`

update_mask
The list of fields to be updated.
WordInfo
Word-specific information for recognized words.
Fields | |
---|---|
`start_time` | Time offset relative to the beginning of the audio, and corresponding to the start of the spoken word. This field is only set if `enable_word_time_offsets=true` and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary. |
`end_time` | Time offset relative to the beginning of the audio, and corresponding to the end of the spoken word. This field is only set if `enable_word_time_offsets=true` and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary. |
`word` | The word corresponding to this set of information. |
`confidence` | The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is set only for the top alternative of a non-streaming result or, of a streaming result where `is_final=true`. This field is not guaranteed to be accurate and users should not rely on it to be always provided. The default of 0.0 is a sentinel value indicating `confidence` was not set. |
`speaker_tag` | Output only. A distinct integer value is assigned for every speaker within the audio. This field specifies which one of those speakers was detected to have spoken this word. Value ranges from '1' to diarization_speaker_count. speaker_tag is set if enable_speaker_diarization = 'true' and only in the top alternative. |
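A closing sketch showing how these fields are typically consumed once enable_word_time_offsets (and optionally enable_word_confidence) is on; the Python client surfaces the Duration offsets as datetime.timedelta, and the file URI is a placeholder:

```python
from google.cloud import speech_v1p1beta1 as speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    language_code="en-US",
    enable_word_time_offsets=True,
    enable_word_confidence=True,
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/audio.flac")  # placeholder

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Word-level info is only populated on the top alternative.
    for info in result.alternatives[0].words:
        start = info.start_time.total_seconds()
        end = info.end_time.total_seconds()
        print(f"{info.word}: {start:.2f}s-{end:.2f}s "
              f"(confidence {info.confidence:.2f})")
```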