Reference documentation and code samples for the Google Cloud Dialogflow Cx V3 Client class InputAudioConfig.
Instructs the speech recognizer on how to process the audio content.
Generated from protobuf message google.cloud.dialogflow.cx.v3.InputAudioConfig
Namespace
Google \ Cloud \ Dialogflow \ Cx \ V3
Methods
__construct
Constructor.
data
array
Optional. Data for populating the Message object.
↳ audio_encoding
int
Required. Audio encoding of the audio content to process.
↳ sample_rate_hertz
int
Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
↳ enable_word_info
bool
Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
↳ phrase_hints
array
Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.
↳ model
string
Optional. Which Speech model to select for the given request. For more information, see Speech models.
↳ model_variant
int
Optional. Which variant of the Speech model to use.
↳ single_utterance
bool
Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods.
↳ barge_in_config
BargeInConfig
Configuration of barge-in behavior during the streaming of input audio.
↳ opt_out_conformer_model_migration
bool
If true, the request will opt out for STT conformer model migration. This field will be deprecated once force migration takes place in June 2024. Please refer to Dialogflow CX Speech model migration.
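As with other generated protobuf messages in this library, the constructor accepts an associative data array keyed by the field names above. A minimal sketch (assuming the google/cloud-dialogflow-cx package is installed via Composer; the LINEAR16 encoding and 16 kHz sample rate are illustrative values, not requirements):

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\Dialogflow\Cx\V3\AudioEncoding;
use Google\Cloud\Dialogflow\Cx\V3\InputAudioConfig;

// Populate the message through the constructor's data array.
$config = new InputAudioConfig([
    'audio_encoding'    => AudioEncoding::AUDIO_ENCODING_LINEAR_16,
    'sample_rate_hertz' => 16000,
    // Ask Speech to return per-word timing information.
    'enable_word_info'  => true,
    // Bias recognition toward domain vocabulary.
    'phrase_hints'      => ['Dialogflow', 'CX'],
]);
```

Any field omitted from the array keeps its protobuf default (0, false, empty string, or empty list).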
getAudioEncoding
Required. Audio encoding of the audio content to process.
int
setAudioEncoding
Required. Audio encoding of the audio content to process.
var
int
$this
getSampleRateHertz
Sample rate (in Hertz) of the audio content sent in the query.
Refer to Cloud Speech API documentation for more details.
int
setSampleRateHertz
Sample rate (in Hertz) of the audio content sent in the query.
Refer to Cloud Speech API documentation for more details.
var
int
$this
getEnableWordInfo
Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
bool
setEnableWordInfo
Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
var
bool
$this
getPhraseHints
Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
setPhraseHints
Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood.
See the Cloud Speech documentation for more details.
var
string[]
$this
getModel
Optional. Which Speech model to select for the given request.
For more information, see Speech models.
string
setModel
Optional. Which Speech model to select for the given request.
For more information, see Speech models.
var
string
$this
getModelVariant
Optional. Which variant of the Speech model to use.
int
setModelVariant
Optional. Which variant of the Speech model to use.
var
int
$this
getSingleUtterance
Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
bool
setSingleUtterance
Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
var
bool
$this
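Because every setter returns $this, a config can also be built fluently instead of through the constructor's data array. A brief sketch (same package assumption as above; the values are illustrative):

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\Dialogflow\Cx\V3\AudioEncoding;
use Google\Cloud\Dialogflow\Cx\V3\InputAudioConfig;

// Chain the generated setters; each one returns $this.
$config = (new InputAudioConfig())
    ->setAudioEncoding(AudioEncoding::AUDIO_ENCODING_LINEAR_16)
    ->setSampleRateHertz(16000)
    // Only meaningful on streaming methods: stop after one utterance.
    ->setSingleUtterance(true);
```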
getBargeInConfig
Configuration of barge-in behavior during the streaming of input audio.
hasBargeInConfig
clearBargeInConfig
setBargeInConfig
Configuration of barge-in behavior during the streaming of input audio.
$this
getOptOutConformerModelMigration
If true, the request will opt out for STT conformer model migration. This field will be deprecated once force migration takes place in June 2024. Please refer to Dialogflow CX Speech model migration.
bool
setOptOutConformerModelMigration
If true, the request will opt out for STT conformer model migration. This field will be deprecated once force migration takes place in June 2024. Please refer to Dialogflow CX Speech model migration.
var
bool
$this