float Hint Boost. A positive value increases the probability
that a specific phrase will be recognized over other similar
sounding phrases. The higher the boost, the higher the
chance of false positive recognition as well. Negative boost
values would correspond to anti-biasing; however, anti-biasing is not
enabled, so negative boost values are simply ignored. Though
`boost` can accept a wide range of positive values, most
use cases are best served with values between 0 (exclusive)
and 20. We recommend using a binary search approach to
finding the optimal value for your use case, as well as
adding phrases both with and without boost to your requests.
Phrases containing words and phrase "hints" so that the speech
recognizer is more likely to recognize them. This can be used to
improve the accuracy for specific words and phrases, for example, if
specific commands are typically spoken by the user. This can also be
used to add additional words to the vocabulary of the recognizer.
See `usage
limits <https://cloud.google.com/speech-to-text/quotas#content>`__.
List items can also include pre-built or custom classes containing
groups of words that represent common concepts that occur in natural
language. For example, rather than providing a phrase hint for every
month of the year (e.g. "i was born in january", "i was born in
february", ...), using the pre-built `$MONTH` class improves the
likelihood of correctly transcribing audio that includes months
(e.g. "i was born in $month"). To refer to pre-built classes, use
the class' symbol prepended with `$`, e.g. `$MONTH`. To refer to
custom classes that were defined inline in the request, set the
class's `custom_class_id` to a string unique to all class
resources and inline classes. Then use the class' id wrapped in
`${...}`, e.g. "${my-months}". To refer to custom class
resources, use the class' id wrapped in `${}` (e.g. `${my-months}`).
Speech-to-Text supports three locations: `global`, `us` (US
North America), and `eu` (Europe). If you are calling the
`speech.googleapis.com` endpoint, use the `global` location. To
specify a region, use a `regional
endpoint <https://cloud.google.com/speech-to-text/docs/endpoints>`__
with a matching `us` or `eu` location value.
Class PhraseSet (2.33.0). Last updated 2025-08-28 UTC.

PhraseSet(mapping=None, \*, ignore_unknown_fields=False, \*\*kwargs)

Provides "hints" to the speech recognizer to favor specific
words and phrases in the results.