Reference documentation and code samples for the Google Cloud Dialogflow V2 Client class InputAudioConfig.
Instructs the speech recognizer how to process the audio content.
Generated from protobuf message google.cloud.dialogflow.v2.InputAudioConfig
Methods
__construct
Constructor.
data
array
Optional. Data for populating the Message object.
↳ audio_encoding
int
Required. Audio encoding of the audio content to process.
↳ sample_rate_hertz
int
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
↳ language_code
string
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
↳ enable_word_info
bool
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
↳ phrase_hints
array
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
↳ speech_contexts
array<Google\Cloud\Dialogflow\V2\SpeechContext>
Context information to assist speech recognition. See the Cloud Speech documentation for more details.
↳ model
string
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
↳ model_variant
int
Which variant of the Speech model to use.
↳ single_utterance
bool
If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
↳ disable_no_speech_recognized_event
bool
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger the NO_SPEECH_RECOGNIZED event to the Dialogflow agent.
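For orientation, a minimal construction sketch, assuming the Google\Cloud\Dialogflow\V2 namespace from this library; the data array keys mirror the field names listed above:

```php
use Google\Cloud\Dialogflow\V2\AudioEncoding;
use Google\Cloud\Dialogflow\V2\InputAudioConfig;

// Populate the message through the constructor's data array.
$config = new InputAudioConfig([
    'audio_encoding'    => AudioEncoding::AUDIO_ENCODING_LINEAR_16,
    'sample_rate_hertz' => 16000,
    'language_code'     => 'en-US',
]);
```

The same fields can also be populated after construction with the setters documented below.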
getAudioEncoding
Required. Audio encoding of the audio content to process.
int
setAudioEncoding
Required. Audio encoding of the audio content to process.
var
int
$this
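As a sketch, assuming the $config instance from the constructor example above, the encoding is set with a constant from this library's AudioEncoding enum class:

```php
use Google\Cloud\Dialogflow\V2\AudioEncoding;

// 16-bit linear PCM; choose the constant that matches your audio.
$config->setAudioEncoding(AudioEncoding::AUDIO_ENCODING_LINEAR_16);
$encoding = $config->getAudioEncoding(); // returns the int enum value
```

Since the setter returns $this, these calls can also be chained fluently.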
getSampleRateHertz
Required. Sample rate (in Hertz) of the audio content sent in the query.
Refer to Cloud Speech API documentation for more details.
int
setSampleRateHertz
Required. Sample rate (in Hertz) of the audio content sent in the query.
Refer to Cloud Speech API documentation for more details.
var
int
$this
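A short sketch; 16000 Hz is used here only as an illustrative value and must match the rate of the audio you actually send:

```php
// Sample rate of the audio content, in Hertz (assumed 16 kHz here).
$config->setSampleRateHertz(16000);
```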
getLanguageCode
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
string
setLanguageCode
Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
var
string
$this
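A sketch using a BCP-47 language tag; 'en-US' is an assumption for illustration:

```php
// Language of the supplied audio; Dialogflow does not translate.
$config->setLanguageCode('en-US');
```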
getEnableWordInfo
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
bool
setEnableWordInfo
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
var
bool
$this
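A one-line sketch, again assuming the $config instance from above:

```php
// Request per-word info (e.g. start/end time offsets) in
// StreamingRecognitionResult; off by default.
$config->setEnableWordInfo(true);
```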
getPhraseHints
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
setPhraseHints
A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext.
var
string[]
$this
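For completeness, a hedged sketch of the deprecated field; the hint strings are invented examples, and speech_contexts (below) is the preferred mechanism:

```php
// Deprecated: these hints are treated as one extra SpeechContext.
$config->setPhraseHints(['Dialogflow', 'fulfillment']);
```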
getSpeechContexts
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
setSpeechContexts
Context information to assist speech recognition.
See the Cloud Speech documentation for more details.
var
array<Google\Cloud\Dialogflow\V2\SpeechContext>
$this
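A sketch of the preferred approach, assuming the SpeechContext message from this package; the phrases are invented examples:

```php
use Google\Cloud\Dialogflow\V2\SpeechContext;

// Bias recognition toward domain-specific phrases.
$context = (new SpeechContext())
    ->setPhrases(['Dialogflow', 'fulfillment webhook']);
$config->setSpeechContexts([$context]);
```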
getModel
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig.
If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
string
setModel
Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig.
If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to Cloud Speech API documentation for more details.
var
string
$this
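A sketch; 'phone_call' is one model name documented for Cloud Speech and is used here purely as an assumed example:

```php
// Leave unset to let Dialogflow auto-select a model.
$config->setModel('phone_call');
```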
getModelVariant
Which variant of the Speech model to use.
int
setModelVariant
Which variant of the Speech model to use.
var
int
$this
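A sketch using the SpeechModelVariant enum class from this library:

```php
use Google\Cloud\Dialogflow\V2\SpeechModelVariant;

// Prefer the enhanced variant of the selected model where available.
$config->setModelVariant(SpeechModelVariant::USE_ENHANCED);
```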
getSingleUtterance
If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
bool
setSingleUtterance
If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
var
bool
$this
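A one-line sketch for streaming use:

```php
// Streaming only: end recognition after the first detected utterance
// rather than waiting for the client to close the stream.
$config->setSingleUtterance(true);
```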
getDisableNoSpeechRecognizedEvent
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger the NO_SPEECH_RECOGNIZED event to the Dialogflow agent.
bool
setDisableNoSpeechRecognizedEvent
Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger the NO_SPEECH_RECOGNIZED event to the Dialogflow agent.
var
bool
$this
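Finally, a one-line sketch for the AnalyzeContent/StreamingAnalyzeContent case:

```php
// Suppress the NO_SPEECH_RECOGNIZED event when recognition
// returns no result.
$config->setDisableNoSpeechRecognizedEvent(true);
```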