Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
↳ phrase_hints
array
Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.
↳ model
string
Optional. Which Speech model to select for the given request. For more information, see Speech models.
↳ single_utterance
bool
Optional. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods.
↳ barge_in_config
BargeInConfig
Configuration of barge-in behavior during the streaming of input audio.
↳ opt_out_conformer_model_migration
bool
If true, the request will opt out of STT conformer model migration. This field will be deprecated once the forced migration takes place in June 2024. Please refer to Dialogflow CX Speech model migration.
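As a rough, non-authoritative sketch of how these fields fit together, the snippet below builds an InputAudioConfig through the protobuf-style constructor array. The AudioEncoding enum class is assumed to be the one generated alongside this client; the phrase hints and the 'telephony' model name are made-up values for illustration.

```php
use Google\Cloud\Dialogflow\Cx\V3\AudioEncoding;
use Google\Cloud\Dialogflow\Cx\V3\InputAudioConfig;

// Hypothetical configuration: 16 kHz linear PCM, word-level timing,
// phrase biasing, and single-utterance streaming behavior.
$inputAudioConfig = new InputAudioConfig([
    'audio_encoding'    => AudioEncoding::AUDIO_ENCODING_LINEAR_16,
    'sample_rate_hertz' => 16000,
    'enable_word_info'  => true,
    'phrase_hints'      => ['account balance', 'wire transfer'], // made-up hints
    'model'             => 'telephony',                          // assumed model name
    'single_utterance'  => true,
]);
```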
getAudioEncoding
Required. Audio encoding of the audio content to process.
Returns
Type
Description
int
setAudioEncoding
Required. Audio encoding of the audio content to process.
Parameter
Name
Description
var
int
Returns
Type
Description
$this
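For illustration, here is a minimal sketch of the encoding accessors, assuming the AudioEncoding enum class generated with this package; the encoding is stored as the enum's int value, and the setter returns $this so calls can be chained.

```php
use Google\Cloud\Dialogflow\Cx\V3\AudioEncoding;
use Google\Cloud\Dialogflow\Cx\V3\InputAudioConfig;

$config = new InputAudioConfig();

// The setter takes the integer value of an AudioEncoding constant and
// returns $this, so calls can be chained.
$config->setAudioEncoding(AudioEncoding::AUDIO_ENCODING_FLAC)
       ->setSampleRateHertz(44100);

// The getter returns the stored int; AudioEncoding::name() maps it back
// to the constant's name, which is convenient for logging.
echo AudioEncoding::name($config->getAudioEncoding()); // AUDIO_ENCODING_FLAC
```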
getSampleRateHertz
Sample rate (in Hertz) of the audio content sent in the query. Refer to the Cloud Speech API documentation for more details.
getEnableWordInfo
Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
Returns
Type
Description
bool
setEnableWordInfo
Optional. If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
Parameter
Name
Description
var
bool
Returns
Type
Description
$this
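A short illustrative sketch of toggling word-level info; the $config variable here is just a fresh instance created for the example.

```php
use Google\Cloud\Dialogflow\Cx\V3\InputAudioConfig;

$config = new InputAudioConfig();

// Ask Speech to attach SpeechWordInfo (word start/end time offsets) to
// each StreamingRecognitionResult produced during streaming recognition.
$config->setEnableWordInfo(true);

// The getter returns the plain bool that was set (false by default).
var_dump($config->getEnableWordInfo()); // bool(true)
```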
getPhraseHints
Optional. A list of strings containing words and phrases that the speech
recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details.
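As a small sketch under the same assumptions, phrase hints are written with a plain PHP array and read back as a protobuf RepeatedField that iterates like an array; the hint strings below are made up.

```php
use Google\Cloud\Dialogflow\Cx\V3\InputAudioConfig;

$config = new InputAudioConfig();

// Bias recognition toward domain-specific vocabulary (made-up hints).
$config->setPhraseHints(['routing number', 'overdraft protection']);

// getPhraseHints() returns a \Google\Protobuf\Internal\RepeatedField,
// which can be iterated like a plain array.
foreach ($config->getPhraseHints() as $hint) {
    echo $hint, PHP_EOL;
}
```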
getSingleUtterance
Optional. If false (default), recognition does not cease until the client closes the stream.
If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed.
Note: This setting is relevant only for streaming methods.
Returns
Type
Description
bool
setSingleUtterance
Optional. If false (default), recognition does not cease until the
client closes the stream.
If true, the recognizer will detect a single spoken utterance in input
audio. Recognition ceases when it detects the audio's voice has
stopped or paused. In this case, once a detected intent is received, the
client should close the stream and start a new request with a new stream as
needed.
Note: This setting is relevant only for streaming methods.
Parameter
Name
Description
var
bool
Returns
Type
Description
$this
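To show where this flag typically sits in a request, here is a rough sketch that wraps the config in the package's AudioInput and QueryInput messages; the streaming call itself is omitted, and the language code is an example value.

```php
use Google\Cloud\Dialogflow\Cx\V3\AudioInput;
use Google\Cloud\Dialogflow\Cx\V3\InputAudioConfig;
use Google\Cloud\Dialogflow\Cx\V3\QueryInput;

// Stop recognition after one spoken utterance; the client then closes
// the stream and opens a new one for the next conversational turn.
$config = new InputAudioConfig();
$config->setSingleUtterance(true);

$queryInput = new QueryInput([
    'audio'         => new AudioInput(['config' => $config]),
    'language_code' => 'en-US',
]);
```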
getBargeInConfig
Configuration of barge-in behavior during the streaming of input audio.
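Finally, a hypothetical sketch of attaching a barge-in policy. The BargeInConfig field names used here (no_barge_in_duration, total_duration) are assumptions and should be checked against the BargeInConfig reference.

```php
use Google\Cloud\Dialogflow\Cx\V3\BargeInConfig;
use Google\Cloud\Dialogflow\Cx\V3\InputAudioConfig;
use Google\Protobuf\Duration;

$config = new InputAudioConfig();

// Assumed field names: ignore caller speech for the first second of the
// prompt, then allow barge-in for up to 30 seconds in total.
$config->setBargeInConfig(new BargeInConfig([
    'no_barge_in_duration' => new Duration(['seconds' => 1]),
    'total_duration'       => new Duration(['seconds' => 30]),
]));

if ($config->hasBargeInConfig()) {
    $bargeIn = $config->getBargeInConfig();
}
```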
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Google Cloud Dialogflow Cx V3 Client - Class InputAudioConfig (0.8.1)\n\nVersion latestkeyboard_arrow_down\n\n- [0.8.1 (latest)](/php/docs/reference/cloud-dialogflow-cx/latest/V3.InputAudioConfig)\n- [0.8.0](/php/docs/reference/cloud-dialogflow-cx/0.8.0/V3.InputAudioConfig)\n- [0.7.2](/php/docs/reference/cloud-dialogflow-cx/0.7.2/V3.InputAudioConfig)\n- [0.6.0](/php/docs/reference/cloud-dialogflow-cx/0.6.0/V3.InputAudioConfig)\n- [0.5.2](/php/docs/reference/cloud-dialogflow-cx/0.5.2/V3.InputAudioConfig)\n- [0.4.1](/php/docs/reference/cloud-dialogflow-cx/0.4.1/V3.InputAudioConfig)\n- [0.3.4](/php/docs/reference/cloud-dialogflow-cx/0.3.4/V3.InputAudioConfig)\n- [0.2.1](/php/docs/reference/cloud-dialogflow-cx/0.2.1/V3.InputAudioConfig)\n- [0.1.1](/php/docs/reference/cloud-dialogflow-cx/0.1.1/V3.InputAudioConfig) \nReference documentation and code samples for the Google Cloud Dialogflow Cx V3 Client class InputAudioConfig.\n\nInstructs the speech recognizer on how to process the audio content.\n\nGenerated from protobuf message `google.cloud.dialogflow.cx.v3.InputAudioConfig`\n\nNamespace\n---------\n\nGoogle \\\\ Cloud \\\\ Dialogflow \\\\ Cx \\\\ V3\n\nMethods\n-------\n\n### __construct\n\nConstructor.\n\n### getAudioEncoding\n\nRequired. Audio encoding of the audio content to process.\n\n### setAudioEncoding\n\nRequired. Audio encoding of the audio content to process.\n\n### getSampleRateHertz\n\nSample rate (in Hertz) of the audio content sent in the query.\n\nRefer to\n[Cloud Speech API\ndocumentation](https://cloud.google.com/speech-to-text/docs/basics) for\nmore details.\n\n### setSampleRateHertz\n\nSample rate (in Hertz) of the audio content sent in the query.\n\nRefer to\n[Cloud Speech API\ndocumentation](https://cloud.google.com/speech-to-text/docs/basics) for\nmore details.\n\n### getEnableWordInfo\n\nOptional. If `true`, Dialogflow returns\n[SpeechWordInfo](/php/docs/reference/cloud-dialogflow-cx/latest/V3.SpeechWordInfo) in\n[StreamingRecognitionResult](/php/docs/reference/cloud-dialogflow-cx/latest/V3.StreamingRecognitionResult)\nwith information about the recognized speech words, e.g. start and end time\noffsets. If false or unspecified, Speech doesn't return any word-level\ninformation.\n\n### setEnableWordInfo\n\nOptional. If `true`, Dialogflow returns\n[SpeechWordInfo](/php/docs/reference/cloud-dialogflow-cx/latest/V3.SpeechWordInfo) in\n[StreamingRecognitionResult](/php/docs/reference/cloud-dialogflow-cx/latest/V3.StreamingRecognitionResult)\nwith information about the recognized speech words, e.g. start and end time\noffsets. If false or unspecified, Speech doesn't return any word-level\ninformation.\n\n### getPhraseHints\n\nOptional. A list of strings containing words and phrases that the speech\nrecognizer should recognize with higher likelihood.\n\nSee [the Cloud Speech\ndocumentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints)\nfor more details.\n\n### setPhraseHints\n\nOptional. 
A list of strings containing words and phrases that the speech\nrecognizer should recognize with higher likelihood.\n\nSee [the Cloud Speech\ndocumentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints)\nfor more details.\n\n### getModel\n\nOptional. Which Speech model to select for the given request.\n\nFor more information, see\n[Speech\nmodels](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models).\n\n### setModel\n\nOptional. Which Speech model to select for the given request.\n\nFor more information, see\n[Speech\nmodels](https://cloud.google.com/dialogflow/cx/docs/concept/speech-models).\n\n### getModelVariant\n\nOptional. Which variant of the [Speech\nmodel](/php/docs/reference/cloud-dialogflow-cx/latest/V3.InputAudioConfig#_Google_Cloud_Dialogflow_Cx_V3_InputAudioConfig__getModel__) to use.\n\n### setModelVariant\n\nOptional. Which variant of the [Speech\nmodel](/php/docs/reference/cloud-dialogflow-cx/latest/V3.InputAudioConfig#_Google_Cloud_Dialogflow_Cx_V3_InputAudioConfig__getModel__) to use.\n\n### getSingleUtterance\n\nOptional. If `false` (default), recognition does not cease until the\nclient closes the stream.\n\nIf `true`, the recognizer will detect a single spoken utterance in input\naudio. Recognition ceases when it detects the audio's voice has\nstopped or paused. In this case, once a detected intent is received, the\nclient should close the stream and start a new request with a new stream as\nneeded.\nNote: This setting is relevant only for streaming methods.\n\n### setSingleUtterance\n\nOptional. If `false` (default), recognition does not cease until the\nclient closes the stream.\n\nIf `true`, the recognizer will detect a single spoken utterance in input\naudio. Recognition ceases when it detects the audio's voice has\nstopped or paused. In this case, once a detected intent is received, the\nclient should close the stream and start a new request with a new stream as\nneeded.\nNote: This setting is relevant only for streaming methods.\n\n### getBargeInConfig\n\nConfiguration of barge-in behavior during the streaming of input audio.\n\n### hasBargeInConfig\n\n### clearBargeInConfig\n\n### setBargeInConfig\n\nConfiguration of barge-in behavior during the streaming of input audio.\n\n### getOptOutConformerModelMigration\n\nIf `true`, the request will opt out for STT conformer model migration.\n\nThis field will be deprecated once force migration takes place in June\n\n1. Please refer to [Dialogflow CX Speech model\n migration](https://cloud.google.com/dialogflow/cx/docs/concept/speech-model-migration).\n\n### setOptOutConformerModelMigration\n\nIf `true`, the request will opt out for STT conformer model migration.\n\nThis field will be deprecated once force migration takes place in June\n\n1. Please refer to [Dialogflow CX Speech model\n migration](https://cloud.google.com/dialogflow/cx/docs/concept/speech-model-migration)."]]