Explicitly specified decoding parameters. Required if using headerless PCM audio (linear16, mulaw, alaw).
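For headerless PCM input, the decoding parameters can be spelled out explicitly. A minimal sketch in PHP, assuming the `google/cloud-speech` Composer package is installed (the sample rate and channel count are illustrative values, not requirements):

```php
<?php
// Sketch: explicit decoding parameters for headerless 16-bit linear PCM.
require 'vendor/autoload.php';

use Google\Cloud\Speech\V2\ExplicitDecodingConfig;
use Google\Cloud\Speech\V2\ExplicitDecodingConfig\AudioEncoding;
use Google\Cloud\Speech\V2\RecognitionConfig;

$decoding = (new ExplicitDecodingConfig())
    ->setEncoding(AudioEncoding::LINEAR16)  // or MULAW / ALAW for those codecs
    ->setSampleRateHertz(16000)
    ->setAudioChannelCount(1);

$config = (new RecognitionConfig())
    ->setExplicitDecodingConfig($decoding);
```

For container formats such as WAV or FLAC, prefer `setAutoDecodingConfig` instead and let the service detect these parameters.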
↳ model
string
Optional. Which model to use for recognition requests. Select the model best suited to your domain to get the best results. Guidance for choosing which model to use can be found in the [Transcription Models Documentation](https://cloud.google.com/speech-to-text/v2/docs/transcription-model), and the models supported in each region can be found in the [Table of Supported Models](https://cloud.google.com/speech-to-text/v2/docs/speech-to-text-supported-languages).
↳ language_codes
array
Optional. The language of the supplied audio as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. Language tags are normalized to BCP-47 before they are used, e.g. "en-us" becomes "en-US". Supported languages for each model are listed in the [Table of Supported Models](https://cloud.google.com/speech-to-text/v2/docs/speech-to-text-supported-languages). If additional languages are provided, the recognition result will contain recognition in the most likely language detected, and will include the language tag of the language detected in the audio.
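Taken together, the `model` and `language_codes` fields might be set as follows (a sketch, assuming the `google/cloud-speech` Composer package; `'long'` stands in for whichever model suits your domain):

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\Speech\V2\AutoDetectDecodingConfig;
use Google\Cloud\Speech\V2\RecognitionConfig;

$config = (new RecognitionConfig())
    ->setAutoDecodingConfig(new AutoDetectDecodingConfig())
    ->setModel('long')                      // pick per the Transcription Models documentation
    ->setLanguageCodes(['en-US', 'fr-FR']); // BCP-47; "en-us" would be normalized to "en-US"
```

Because two languages are listed here, the service picks the most likely one and reports its tag in the recognition result.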
Optional. Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
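A normalization rule is a search/replace entry. The sketch below (assuming the `google/cloud-speech` Composer package; the search and replace strings are hypothetical examples) attaches one case-insensitive rule:

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\Speech\V2\RecognitionConfig;
use Google\Cloud\Speech\V2\TranscriptNormalization;
use Google\Cloud\Speech\V2\TranscriptNormalization\Entry;

// Replace "cloud speech" (any casing) with the branded spelling.
$entry = (new Entry())
    ->setSearch('cloud speech')
    ->setReplace('Cloud Speech')
    ->setCaseSensitive(false);

$config = (new RecognitionConfig())
    ->setTranscriptNormalization(
        (new TranscriptNormalization())->setEntries([$entry])
    );
```

For streaming recognition, remember that these rules only fire on stable partials (stability > 0.8) and finals, so early partial transcripts may still show the raw text.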
# Cloud Speech V2 Client - Class RecognitionConfig (2.1.1)

Reference documentation and code samples for the Cloud Speech V2 Client class RecognitionConfig.

Provides information to the Recognizer that specifies how to process the recognition request.

Generated from protobuf message `google.cloud.speech.v2.RecognitionConfig`

Namespace
---------

Google \ Cloud \ Speech \ V2

Methods
-------

### __construct

Constructor.

### getAutoDecodingConfig

Automatically detect decoding parameters. Preferred for supported formats.

### hasAutoDecodingConfig

### setAutoDecodingConfig

Automatically detect decoding parameters. Preferred for supported formats.

### getExplicitDecodingConfig

Explicitly specified decoding parameters. Required if using headerless PCM audio (linear16, mulaw, alaw).

### hasExplicitDecodingConfig

### setExplicitDecodingConfig

Explicitly specified decoding parameters. Required if using headerless PCM audio (linear16, mulaw, alaw).

### getModel

Optional. Which model to use for recognition requests. Select the model best suited to your domain to get the best results.

Guidance for choosing which model to use can be found in the [Transcription Models Documentation](https://cloud.google.com/speech-to-text/v2/docs/transcription-model), and the models supported in each region can be found in the [Table of Supported Models](https://cloud.google.com/speech-to-text/v2/docs/speech-to-text-supported-languages).

### setModel

Optional. Which model to use for recognition requests. Select the model best suited to your domain to get the best results.

Guidance for choosing which model to use can be found in the [Transcription Models Documentation](https://cloud.google.com/speech-to-text/v2/docs/transcription-model), and the models supported in each region can be found in the [Table of Supported Models](https://cloud.google.com/speech-to-text/v2/docs/speech-to-text-supported-languages).

### getLanguageCodes

Optional. The language of the supplied audio as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.

Language tags are normalized to BCP-47 before they are used, e.g. "en-us" becomes "en-US". Supported languages for each model are listed in the [Table of Supported Models](https://cloud.google.com/speech-to-text/v2/docs/speech-to-text-supported-languages). If additional languages are provided, the recognition result will contain recognition in the most likely language detected, and will include the language tag of the language detected in the audio.

### setLanguageCodes

Optional. The language of the supplied audio as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.

Language tags are normalized to BCP-47 before they are used, e.g. "en-us" becomes "en-US". Supported languages for each model are listed in the [Table of Supported Models](https://cloud.google.com/speech-to-text/v2/docs/speech-to-text-supported-languages). If additional languages are provided, the recognition result will contain recognition in the most likely language detected, and will include the language tag of the language detected in the audio.

### getFeatures

Speech recognition features to enable.

### hasFeatures

### clearFeatures

### setFeatures

Speech recognition features to enable.

### getAdaptation

Speech adaptation context that weights recognizer predictions for specific words and phrases.

### hasAdaptation

### clearAdaptation

### setAdaptation

Speech adaptation context that weights recognizer predictions for specific words and phrases.

### getTranscriptNormalization

Optional. Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.

### hasTranscriptNormalization

### clearTranscriptNormalization

### setTranscriptNormalization

Optional. Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.

### getTranslationConfig

Optional. Configuration used to automatically run translation on the given audio to the desired language, for supported models.

### hasTranslationConfig

### clearTranslationConfig

### setTranslationConfig

Optional. Configuration used to automatically run translation on the given audio to the desired language, for supported models.

### getDenoiserConfig

Optional. Denoiser configuration. May not be supported for all models and may have no effect.

### hasDenoiserConfig

### clearDenoiserConfig

### setDenoiserConfig

Optional. Denoiser configuration. May not be supported for all models and may have no effect.

### getDecodingConfig

Last updated 2025-09-04 UTC.